00:00:00.000 Started by upstream project "autotest-per-patch" build number 132351
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.108 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.160 The recommended git tool is: git
00:00:02.160 using credential 00000000-0000-0000-0000-000000000002
00:00:02.162 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.176 Fetching changes from the remote Git repository
00:00:02.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.191 Using shallow fetch with depth 1
00:00:02.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.191 > git --version # timeout=10
00:00:02.207 > git --version # 'git version 2.39.2'
00:00:02.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.221 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.088 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.103 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.117 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.117 > git config core.sparsecheckout # timeout=10
00:00:08.130 > git read-tree -mu HEAD # timeout=10
00:00:08.148 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.175 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.175 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.265 [Pipeline] Start of Pipeline
00:00:08.280 [Pipeline] library
00:00:08.281 Loading library shm_lib@master
00:00:08.282 Library shm_lib@master is cached. Copying from home.
00:00:08.297 [Pipeline] node
00:00:23.299 Still waiting to schedule task
00:00:23.300 Waiting for next available executor on ‘vagrant-vm-host’
00:14:23.219 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest
00:14:23.220 [Pipeline] {
00:14:23.231 [Pipeline] catchError
00:14:23.233 [Pipeline] {
00:14:23.249 [Pipeline] wrap
00:14:23.258 [Pipeline] {
00:14:23.268 [Pipeline] stage
00:14:23.271 [Pipeline] { (Prologue)
00:14:23.291 [Pipeline] echo
00:14:23.293 Node: VM-host-SM4
00:14:23.300 [Pipeline] cleanWs
00:14:23.311 [WS-CLEANUP] Deleting project workspace...
00:14:23.311 [WS-CLEANUP] Deferred wipeout is used...
00:14:23.317 [WS-CLEANUP] done
00:14:23.525 [Pipeline] setCustomBuildProperty
00:14:23.616 [Pipeline] httpRequest
00:14:23.927 [Pipeline] echo
00:14:23.929 Sorcerer 10.211.164.20 is alive
00:14:23.941 [Pipeline] retry
00:14:23.944 [Pipeline] {
00:14:23.959 [Pipeline] httpRequest
00:14:23.964 HttpMethod: GET
00:14:23.965 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:23.965 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:23.966 Response Code: HTTP/1.1 200 OK
00:14:23.967 Success: Status code 200 is in the accepted range: 200,404
00:14:23.967 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:24.113 [Pipeline] }
00:14:24.131 [Pipeline] // retry
00:14:24.139 [Pipeline] sh
00:14:24.422 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:14:24.697 [Pipeline] httpRequest
00:14:25.011 [Pipeline] echo
00:14:25.013 Sorcerer 10.211.164.20 is alive
00:14:25.024 [Pipeline] retry
00:14:25.026 [Pipeline] {
00:14:25.042 [Pipeline] httpRequest
00:14:25.047 HttpMethod: GET
00:14:25.048 URL: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:25.048 Sending request to url: http://10.211.164.20/packages/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:25.049 Response Code: HTTP/1.1 200 OK
00:14:25.049 Success: Status code 200 is in the accepted range: 200,404
00:14:25.050 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:27.321 [Pipeline] }
00:14:27.339 [Pipeline] // retry
00:14:27.347 [Pipeline] sh
00:14:27.627 + tar --no-same-owner -xf spdk_400f484f7a9b50c2a8ebe6def409514cdbc7140c.tar.gz
00:14:31.021 [Pipeline] sh
00:14:31.329 + git -C spdk log --oneline -n5
00:14:31.329 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:14:31.329 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:14:31.329 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:14:31.329 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:14:31.329 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:14:31.351 [Pipeline] writeFile
00:14:31.367 [Pipeline] sh
00:14:31.643 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:31.656 [Pipeline] sh
00:14:31.936 + cat autorun-spdk.conf
00:14:31.936 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:31.936 SPDK_TEST_NVME=1
00:14:31.936 SPDK_TEST_FTL=1
00:14:31.936 SPDK_TEST_ISAL=1
00:14:31.936 SPDK_RUN_ASAN=1
00:14:31.936 SPDK_RUN_UBSAN=1
00:14:31.936 SPDK_TEST_XNVME=1
00:14:31.936 SPDK_TEST_NVME_FDP=1
00:14:31.936 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:31.942 RUN_NIGHTLY=0
00:14:31.944 [Pipeline] }
00:14:31.959 [Pipeline] // stage
00:14:31.976 [Pipeline] stage
00:14:31.979 [Pipeline] { (Run VM)
00:14:31.991 [Pipeline] sh
00:14:32.273 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:32.273 + echo 'Start stage prepare_nvme.sh'
00:14:32.273 Start stage prepare_nvme.sh
00:14:32.273 + [[ -n 0 ]]
00:14:32.273 + disk_prefix=ex0
00:14:32.273 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:14:32.273 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:14:32.273 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:14:32.273 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:32.273 ++ SPDK_TEST_NVME=1
00:14:32.273 ++ SPDK_TEST_FTL=1
00:14:32.273 ++ SPDK_TEST_ISAL=1
00:14:32.273 ++ SPDK_RUN_ASAN=1
00:14:32.273 ++ SPDK_RUN_UBSAN=1
00:14:32.273 ++ SPDK_TEST_XNVME=1
00:14:32.273 ++ SPDK_TEST_NVME_FDP=1
00:14:32.273 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:32.273 ++ RUN_NIGHTLY=0
00:14:32.273 + cd /var/jenkins/workspace/nvme-vg-autotest
00:14:32.273 + nvme_files=()
00:14:32.273 + declare -A nvme_files
00:14:32.273 + backend_dir=/var/lib/libvirt/images/backends
00:14:32.273 + nvme_files['nvme.img']=5G
00:14:32.273 + nvme_files['nvme-cmb.img']=5G
00:14:32.273 + nvme_files['nvme-multi0.img']=4G
00:14:32.273 + nvme_files['nvme-multi1.img']=4G
00:14:32.273 + nvme_files['nvme-multi2.img']=4G
00:14:32.273 + nvme_files['nvme-openstack.img']=8G
00:14:32.273 + nvme_files['nvme-zns.img']=5G
00:14:32.273 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:32.273 + (( SPDK_TEST_FTL == 1 ))
00:14:32.273 + nvme_files["nvme-ftl.img"]=6G
00:14:32.273 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:32.273 + nvme_files["nvme-fdp.img"]=1G
00:14:32.273 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:32.273 + for nvme in "${!nvme_files[@]}"
00:14:32.273 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:14:32.273 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:32.273 + for nvme in "${!nvme_files[@]}"
00:14:32.273 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:14:32.273 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:14:32.273 + for nvme in "${!nvme_files[@]}"
00:14:32.274 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:14:32.274 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:32.274 + for nvme in "${!nvme_files[@]}"
00:14:32.274 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:14:32.274 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:32.274 + for nvme in "${!nvme_files[@]}"
00:14:32.274 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:14:32.274 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:32.274 + for nvme in "${!nvme_files[@]}"
00:14:32.274 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:14:32.533 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:32.533 + for nvme in "${!nvme_files[@]}"
00:14:32.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:14:32.533 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:32.533 + for nvme in "${!nvme_files[@]}"
00:14:32.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:14:32.533 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:14:32.533 + for nvme in "${!nvme_files[@]}"
00:14:32.533 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:14:33.467 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:33.467 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:14:33.467 + echo 'End stage prepare_nvme.sh'
00:14:33.467 End stage prepare_nvme.sh
00:14:33.478 [Pipeline] sh
00:14:33.760 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:33.760 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:14:33.760
00:14:33.760 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:14:33.760 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:14:33.760 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:14:33.760 HELP=0
00:14:33.760 DRY_RUN=0
00:14:33.760 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:14:33.760 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:14:33.760 NVME_AUTO_CREATE=0
00:14:33.760 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:14:33.760 NVME_CMB=,,,,
00:14:33.760 NVME_PMR=,,,,
00:14:33.760 NVME_ZNS=,,,,
00:14:33.760 NVME_MS=true,,,,
00:14:33.760 NVME_FDP=,,,on,
00:14:33.760 SPDK_VAGRANT_DISTRO=fedora39
00:14:33.760 SPDK_VAGRANT_VMCPU=10
00:14:33.760 SPDK_VAGRANT_VMRAM=12288
00:14:33.760 SPDK_VAGRANT_PROVIDER=libvirt
00:14:33.760 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:14:33.760 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:33.760 SPDK_OPENSTACK_NETWORK=0
00:14:33.760 VAGRANT_PACKAGE_BOX=0
00:14:33.760 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:14:33.760 FORCE_DISTRO=true
00:14:33.760 VAGRANT_BOX_VERSION=
00:14:33.760 EXTRA_VAGRANTFILES=
00:14:33.760 NIC_MODEL=e1000
00:14:33.760
00:14:33.760 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:14:33.760 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:14:37.045 Bringing machine 'default' up with 'libvirt' provider...
00:14:37.981 ==> default: Creating image (snapshot of base box volume).
00:14:38.240 ==> default: Creating domain with the following settings...
00:14:38.240 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732086721_4bce1a78ca7b2c1ea9ef
00:14:38.241 ==> default: -- Domain type: kvm
00:14:38.241 ==> default: -- Cpus: 10
00:14:38.241 ==> default: -- Feature: acpi
00:14:38.241 ==> default: -- Feature: apic
00:14:38.241 ==> default: -- Feature: pae
00:14:38.241 ==> default: -- Memory: 12288M
00:14:38.241 ==> default: -- Memory Backing: hugepages:
00:14:38.241 ==> default: -- Management MAC:
00:14:38.241 ==> default: -- Loader:
00:14:38.241 ==> default: -- Nvram:
00:14:38.241 ==> default: -- Base box: spdk/fedora39
00:14:38.241 ==> default: -- Storage pool: default
00:14:38.241 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732086721_4bce1a78ca7b2c1ea9ef.img (20G)
00:14:38.241 ==> default: -- Volume Cache: default
00:14:38.241 ==> default: -- Kernel:
00:14:38.241 ==> default: -- Initrd:
00:14:38.241 ==> default: -- Graphics Type: vnc
00:14:38.241 ==> default: -- Graphics Port: -1
00:14:38.241 ==> default: -- Graphics IP: 127.0.0.1
00:14:38.241 ==> default: -- Graphics Password: Not defined
00:14:38.241 ==> default: -- Video Type: cirrus
00:14:38.241 ==> default: -- Video VRAM: 9216
00:14:38.241 ==> default: -- Sound Type:
00:14:38.241 ==> default: -- Keymap: en-us
00:14:38.241 ==> default: -- TPM Path:
00:14:38.241 ==> default: -- INPUT: type=mouse, bus=ps2
00:14:38.241 ==> default: -- Command line args:
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:14:38.241 ==> default: -> value=-drive,
00:14:38.241 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:14:38.241 ==> default: -> value=-device,
00:14:38.241 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:38.241 ==> default: Creating shared folders metadata...
00:14:38.241 ==> default: Starting domain.
00:14:40.144 ==> default: Waiting for domain to get an IP address...
00:14:58.320 ==> default: Waiting for SSH to become available...
00:14:58.320 ==> default: Configuring and enabling network interfaces...
00:15:02.505 default: SSH address: 192.168.121.234:22
00:15:02.505 default: SSH username: vagrant
00:15:02.505 default: SSH auth method: private key
00:15:05.060 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:15:13.195 ==> default: Mounting SSHFS shared folder...
00:15:15.137 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:15:15.137 ==> default: Checking Mount..
00:15:16.510 ==> default: Folder Successfully Mounted!
00:15:16.510 ==> default: Running provisioner: file...
00:15:17.443 default: ~/.gitconfig => .gitconfig
00:15:18.010
00:15:18.010 SUCCESS!
00:15:18.010
00:15:18.010 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:15:18.010 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:15:18.010 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:15:18.010
00:15:18.018 [Pipeline] }
00:15:18.032 [Pipeline] // stage
00:15:18.040 [Pipeline] dir
00:15:18.041 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:15:18.043 [Pipeline] {
00:15:18.052 [Pipeline] catchError
00:15:18.053 [Pipeline] {
00:15:18.063 [Pipeline] sh
00:15:18.341 + vagrant ssh-config --host vagrant
00:15:18.341 + sed -ne /^Host/,$p
00:15:18.341 + tee ssh_conf
00:15:22.531 Host vagrant
00:15:22.531 HostName 192.168.121.234
00:15:22.531 User vagrant
00:15:22.531 Port 22
00:15:22.531 UserKnownHostsFile /dev/null
00:15:22.531 StrictHostKeyChecking no
00:15:22.531 PasswordAuthentication no
00:15:22.531 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:15:22.531 IdentitiesOnly yes
00:15:22.531 LogLevel FATAL
00:15:22.531 ForwardAgent yes
00:15:22.531 ForwardX11 yes
00:15:22.531
00:15:22.545 [Pipeline] withEnv
00:15:22.547 [Pipeline] {
00:15:22.561 [Pipeline] sh
00:15:22.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:15:22.842 source /etc/os-release
00:15:22.842 [[ -e /image.version ]] && img=$(< /image.version)
00:15:22.842 # Minimal, systemd-like check.
00:15:22.842 if [[ -e /.dockerenv ]]; then
00:15:22.842 # Clear garbage from the node's name:
00:15:22.842 # agt-er_autotest_547-896 -> autotest_547-896
00:15:22.842 # $HOSTNAME is the actual container id
00:15:22.842 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:15:22.842 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:15:22.842 # We can assume this is a mount from a host where container is running,
00:15:22.842 # so fetch its hostname to easily identify the target swarm worker.
00:15:22.842 container="$(< /etc/hostname) ($agent)"
00:15:22.842 else
00:15:22.842 # Fallback
00:15:22.842 container=$agent
00:15:22.842 fi
00:15:22.842 fi
00:15:22.842 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:15:22.842
00:15:23.112 [Pipeline] }
00:15:23.126 [Pipeline] // withEnv
00:15:23.133 [Pipeline] setCustomBuildProperty
00:15:23.146 [Pipeline] stage
00:15:23.148 [Pipeline] { (Tests)
00:15:23.162 [Pipeline] sh
00:15:23.442 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:15:23.716 [Pipeline] sh
00:15:23.994 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:15:24.266 [Pipeline] timeout
00:15:24.267 Timeout set to expire in 50 min
00:15:24.269 [Pipeline] {
00:15:24.282 [Pipeline] sh
00:15:24.562 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:15:25.130 HEAD is now at 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:15:25.144 [Pipeline] sh
00:15:25.429 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:15:25.700 [Pipeline] sh
00:15:25.978 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:15:26.252 [Pipeline] sh
00:15:26.530 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:15:26.789 ++ readlink -f spdk_repo
00:15:26.789 + DIR_ROOT=/home/vagrant/spdk_repo
00:15:26.789 + [[ -n /home/vagrant/spdk_repo ]]
00:15:26.789 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:15:26.789 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:15:26.789 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:15:26.789 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:15:26.789 + [[ -d /home/vagrant/spdk_repo/output ]]
00:15:26.789 + [[ nvme-vg-autotest == pkgdep-* ]]
00:15:26.789 + cd /home/vagrant/spdk_repo
00:15:26.789 + source /etc/os-release
00:15:26.789 ++ NAME='Fedora Linux'
00:15:26.789 ++ VERSION='39 (Cloud Edition)'
00:15:26.789 ++ ID=fedora
00:15:26.789 ++ VERSION_ID=39
00:15:26.789 ++ VERSION_CODENAME=
00:15:26.789 ++ PLATFORM_ID=platform:f39
00:15:26.789 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:15:26.789 ++ ANSI_COLOR='0;38;2;60;110;180'
00:15:26.789 ++ LOGO=fedora-logo-icon
00:15:26.789 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:15:26.789 ++ HOME_URL=https://fedoraproject.org/
00:15:26.789 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:15:26.789 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:15:26.789 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:15:26.789 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:15:26.789 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:15:26.789 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:15:26.789 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:15:26.789 ++ SUPPORT_END=2024-11-12
00:15:26.789 ++ VARIANT='Cloud Edition'
00:15:26.789 ++ VARIANT_ID=cloud
00:15:26.789 + uname -a
00:15:26.789 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:15:26.789 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:15:27.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:27.612 Hugepages
00:15:27.612 node hugesize free / total
00:15:27.612 node0 1048576kB 0 / 0
00:15:27.612 node0 2048kB 0 / 0
00:15:27.612
00:15:27.612 Type BDF Vendor Device NUMA Driver Device Block devices
00:15:27.612 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:15:27.612 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:15:27.612 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:15:27.612 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:15:27.612 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:15:27.612 + rm -f /tmp/spdk-ld-path
00:15:27.612 + source autorun-spdk.conf
00:15:27.612 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:15:27.612 ++ SPDK_TEST_NVME=1
00:15:27.612 ++ SPDK_TEST_FTL=1
00:15:27.612 ++ SPDK_TEST_ISAL=1
00:15:27.612 ++ SPDK_RUN_ASAN=1
00:15:27.612 ++ SPDK_RUN_UBSAN=1
00:15:27.612 ++ SPDK_TEST_XNVME=1
00:15:27.612 ++ SPDK_TEST_NVME_FDP=1
00:15:27.612 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:27.612 ++ RUN_NIGHTLY=0
00:15:27.612 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:15:27.612 + [[ -n '' ]]
00:15:27.612 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:15:27.612 + for M in /var/spdk/build-*-manifest.txt
00:15:27.612 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:15:27.612 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:15:27.612 + for M in /var/spdk/build-*-manifest.txt
00:15:27.612 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:15:27.612 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:15:27.612 + for M in /var/spdk/build-*-manifest.txt
00:15:27.612 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:15:27.612 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:15:27.612 ++ uname
00:15:27.612 + [[ Linux == \L\i\n\u\x ]]
00:15:27.612 + sudo dmesg -T
00:15:27.871 + sudo dmesg --clear
00:15:27.871 + dmesg_pid=5301
00:15:27.871 + sudo dmesg -Tw
00:15:27.871 + [[ Fedora Linux == FreeBSD ]]
00:15:27.871 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:27.871 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:15:27.871 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:15:27.871 + [[ -x /usr/src/fio-static/fio ]]
00:15:27.871 + export FIO_BIN=/usr/src/fio-static/fio
00:15:27.871 + FIO_BIN=/usr/src/fio-static/fio
00:15:27.871 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:15:27.871 + [[ ! -v VFIO_QEMU_BIN ]]
00:15:27.871 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:15:27.871 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:27.871 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:15:27.871 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:15:27.871 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:27.871 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:15:27.871 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:27.871 07:12:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:15:27.871 07:12:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:15:27.871 07:12:51 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:15:27.871 07:12:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:15:27.871 07:12:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:15:27.871 07:12:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:15:27.871 07:12:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:27.871 07:12:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:15:27.871 07:12:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:15:27.871 07:12:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:27.871 07:12:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:27.871 07:12:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:27.871 07:12:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:27.871 07:12:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:27.871 07:12:51 -- paths/export.sh@5 -- $ export PATH
00:15:27.871 07:12:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:27.871 07:12:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:15:27.871 07:12:52 -- common/autobuild_common.sh@493 -- $ date +%s
00:15:27.871 07:12:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086772.XXXXXX
00:15:27.871 07:12:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086772.13BBpv
00:15:27.871 07:12:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:15:27.871 07:12:52 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:15:27.871 07:12:52 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:15:27.871 07:12:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:15:27.871 07:12:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:15:27.871 07:12:52 -- common/autobuild_common.sh@509 -- $ get_config_params
00:15:27.871 07:12:52 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:15:27.871 07:12:52 -- common/autotest_common.sh@10 -- $ set +x
00:15:27.871 07:12:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:15:27.871 07:12:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:15:27.871 07:12:52 -- pm/common@17 -- $ local monitor
00:15:27.871 07:12:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:15:27.871 07:12:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:15:27.871 07:12:52 -- pm/common@25 -- $ sleep 1
00:15:27.871 07:12:52 -- pm/common@21 -- $ date +%s
00:15:27.871 07:12:52 -- pm/common@21 -- $ date +%s
00:15:27.871 07:12:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086772
00:15:27.871 07:12:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086772
00:15:27.871 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086772_collect-vmstat.pm.log
00:15:28.142 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086772_collect-cpu-load.pm.log
00:15:29.093 07:12:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:15:29.093 07:12:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:15:29.093 07:12:53 -- spdk/autobuild.sh@12 -- $ umask 022
00:15:29.093 07:12:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:15:29.093 07:12:53 -- spdk/autobuild.sh@16 -- $ date -u
00:15:29.093 Wed Nov 20 07:12:53 AM UTC 2024
00:15:29.093 07:12:53 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:15:29.093 v25.01-pre-202-g400f484f7
00:15:29.093 07:12:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:15:29.093 07:12:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:15:29.094 07:12:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:29.094 07:12:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:29.094 07:12:53 -- common/autotest_common.sh@10 -- $ set +x
00:15:29.094 ************************************
00:15:29.094 START TEST asan
00:15:29.094 ************************************
00:15:29.094 using asan
00:15:29.094 07:12:53 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:15:29.094
00:15:29.094 real 0m0.000s
00:15:29.094 user 0m0.000s
00:15:29.094 sys 0m0.000s
00:15:29.094 07:12:53 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:15:29.094 ************************************
00:15:29.094 END TEST asan
00:15:29.094 07:12:53 asan -- common/autotest_common.sh@10 -- $ set +x
00:15:29.094 ************************************
00:15:29.094 07:12:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:15:29.094 07:12:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:15:29.094 07:12:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:15:29.094 07:12:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:15:29.094 07:12:53 -- common/autotest_common.sh@10 -- $ set +x
00:15:29.094 ************************************
00:15:29.094 START TEST ubsan
00:15:29.094 ************************************
00:15:29.094 using ubsan
00:15:29.094 07:12:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:15:29.094
00:15:29.094 real 0m0.000s
00:15:29.094 user 0m0.000s
00:15:29.094 sys 0m0.000s
00:15:29.094 07:12:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:15:29.094 07:12:53 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:15:29.094 ************************************
00:15:29.094 END TEST ubsan
00:15:29.094 ************************************
00:15:29.094 07:12:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:15:29.094 07:12:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:15:29.094 07:12:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:15:29.094 07:12:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:15:29.094 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:15:29.094 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:15:29.659 Using 'verbs' RDMA provider
00:15:45.944 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:16:00.817 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:16:00.817 Creating mk/config.mk...done.
00:16:00.817 Creating mk/cc.flags.mk...done.
00:16:00.817 Type 'make' to build.
00:16:00.817 07:13:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:16:00.817 07:13:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:16:00.817 07:13:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:16:00.817 07:13:23 -- common/autotest_common.sh@10 -- $ set +x
00:16:00.817 ************************************
00:16:00.817 START TEST make
00:16:00.817 ************************************
00:16:00.817 07:13:23 make -- common/autotest_common.sh@1129 -- $ make -j10
00:16:00.817 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:16:00.817 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:16:00.817 meson setup builddir \
00:16:00.817 -Dwith-libaio=enabled \
00:16:00.817 -Dwith-liburing=enabled \
00:16:00.817 -Dwith-libvfn=disabled \
00:16:00.817 -Dwith-spdk=disabled \
00:16:00.817 -Dexamples=false \
00:16:00.817 -Dtests=false \
00:16:00.817 -Dtools=false && \
00:16:00.817 meson compile -C builddir && \
00:16:00.817 cd -)
00:16:00.817 make[1]: Nothing to be done for 'all'.
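For anyone replaying this step outside of CI: the xnvme sub-build that make hands off to above is a plain meson project, so the echoed command can be rerun by hand. A minimal sketch, assuming only a local SPDK checkout under ~/spdk_repo/spdk as used in this log (the flags, paths, and the builddir name are copied from the invocation above; nothing else about the CI scripts is implied):

  # Configure xnvme the way this autotest run does: libaio and io_uring
  # backends enabled, libvfn and the SPDK integration disabled, and
  # examples/tests/tools skipped.
  cd ~/spdk_repo/spdk/xnvme
  export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
  meson setup builddir \
    -Dwith-libaio=enabled -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled -Dwith-spdk=disabled \
    -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir

The meson output that follows (backend detection, feature probes, and the 76-step ninja compile) is what this configuration produces.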
00:16:02.749 The Meson build system
00:16:02.749 Version: 1.5.0
00:16:02.749 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:16:02.749 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:16:02.749 Build type: native build
00:16:02.749 Project name: xnvme
00:16:02.749 Project version: 0.7.5
00:16:02.749 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:16:02.749 C linker for the host machine: cc ld.bfd 2.40-14
00:16:02.749 Host machine cpu family: x86_64
00:16:02.749 Host machine cpu: x86_64
00:16:02.749 Message: host_machine.system: linux
00:16:02.749 Compiler for C supports arguments -Wno-missing-braces: YES
00:16:02.749 Compiler for C supports arguments -Wno-cast-function-type: YES
00:16:02.749 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:16:02.749 Run-time dependency threads found: YES
00:16:02.749 Has header "setupapi.h" : NO
00:16:02.749 Has header "linux/blkzoned.h" : YES
00:16:02.749 Has header "linux/blkzoned.h" : YES (cached)
00:16:02.749 Has header "libaio.h" : YES
00:16:02.749 Library aio found: YES
00:16:02.749 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:16:02.749 Run-time dependency liburing found: YES 2.2
00:16:02.749 Dependency libvfn skipped: feature with-libvfn disabled
00:16:02.749 Found CMake: /usr/bin/cmake (3.27.7)
00:16:02.749 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:16:02.749 Subproject spdk : skipped: feature with-spdk disabled
00:16:02.749 Run-time dependency appleframeworks found: NO (tried framework)
00:16:02.749 Run-time dependency appleframeworks found: NO (tried framework)
00:16:02.749 Library rt found: YES
00:16:02.749 Checking for function "clock_gettime" with dependency -lrt: YES
00:16:02.749 Configuring xnvme_config.h using configuration
00:16:02.749 Configuring xnvme.spec using configuration
00:16:02.749 Run-time dependency bash-completion found: YES 2.11
00:16:02.749 Message: Bash-completions: /usr/share/bash-completion/completions
00:16:02.749 Program cp found: YES (/usr/bin/cp)
00:16:02.749 Build targets in project: 3
00:16:02.749
00:16:02.749 xnvme 0.7.5
00:16:02.749
00:16:02.749 Subprojects
00:16:02.749 spdk : NO Feature 'with-spdk' disabled
00:16:02.749
00:16:02.749 User defined options
00:16:02.749 examples : false
00:16:02.749 tests : false
00:16:02.749 tools : false
00:16:02.749 with-libaio : enabled
00:16:02.749 with-liburing: enabled
00:16:02.749 with-libvfn : disabled
00:16:02.749 with-spdk : disabled
00:16:02.749
00:16:02.749 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:16:03.315 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:16:03.315 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:16:03.573 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:16:03.573 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:16:03.573 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:16:03.573 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:16:03.573 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:16:03.573 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:16:03.573 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:16:03.573 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:16:03.573 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:16:03.573 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:16:03.573 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:16:03.573 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:16:03.573 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:16:03.831 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:16:03.831 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:16:03.831 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:16:03.831 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:16:03.832 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:16:03.832 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:16:03.832 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:16:03.832 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:16:03.832 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:16:03.832 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:16:03.832 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:16:03.832 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:16:03.832 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:16:03.832 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:16:03.832 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:16:03.832 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:16:03.832 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:16:03.832 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:16:03.832 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:16:03.832 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:16:03.832 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:16:03.832 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:16:04.090 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:16:04.090 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:16:04.090 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:16:04.090 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:16:04.090 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:16:04.090 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:16:04.090 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:16:04.090 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:16:04.090 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:16:04.090 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:16:04.090 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:16:04.090 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:16:04.090 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:16:04.090 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:16:04.090 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:16:04.090 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:16:04.090 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:16:04.090 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:16:04.090 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:16:04.090 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:16:04.090 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:16:04.090 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:16:04.090 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:16:04.348 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:16:04.348 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:16:04.348 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:16:04.348 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:16:04.348 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:16:04.348 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:16:04.348 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:16:04.348 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:16:04.348 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:16:04.348 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:16:04.348 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:16:04.348 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:16:04.606 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:16:04.606 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:16:04.864 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:16:04.864 [75/76] Linking static target lib/libxnvme.a
00:16:04.864 [76/76] Linking target lib/libxnvme.so.0.7.5
00:16:05.161 INFO: autodetecting backend as ninja
00:16:05.161 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:16:05.161 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:16:15.179 The Meson build system
00:16:15.179 Version: 1.5.0
00:16:15.179 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:16:15.179 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:16:15.179 Build type: native build
00:16:15.179 Program cat found: YES (/usr/bin/cat)
00:16:15.179 Project name: DPDK
00:16:15.179 Project version: 24.03.0
00:16:15.179 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:16:15.179 C linker for the host machine: cc ld.bfd 2.40-14
00:16:15.179 Host machine cpu family: x86_64
00:16:15.179 Host machine cpu: x86_64
00:16:15.179 Message: ## Building in Developer Mode ##
00:16:15.179 Program pkg-config found: YES (/usr/bin/pkg-config)
00:16:15.179 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:16:15.179 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:16:15.179 Program python3 found: YES (/usr/bin/python3)
00:16:15.179 Program cat found: YES (/usr/bin/cat)
00:16:15.179 Compiler for C supports arguments -march=native: YES
00:16:15.179 Checking for size of "void *" : 8
00:16:15.179 Checking for size of "void *" : 8 (cached)
00:16:15.179 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:16:15.179 Library m found: YES
00:16:15.179 Library numa found: YES
00:16:15.179 Has header "numaif.h" : YES
00:16:15.179 Library fdt found: NO
00:16:15.179 Library execinfo found: NO
00:16:15.179 Has header "execinfo.h" : YES
00:16:15.179 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:16:15.179 Run-time dependency libarchive found: NO (tried pkgconfig)
00:16:15.179 Run-time dependency libbsd found: NO (tried pkgconfig)
00:16:15.179 Run-time dependency jansson found: NO (tried pkgconfig)
00:16:15.179 Run-time dependency openssl found: YES 3.1.1
00:16:15.179 Run-time dependency libpcap found: YES 1.10.4
00:16:15.179 Has header "pcap.h" with dependency libpcap: YES
00:16:15.179 Compiler for C supports arguments -Wcast-qual: YES
00:16:15.179 Compiler for C supports arguments -Wdeprecated: YES
00:16:15.179 Compiler for C supports arguments -Wformat: YES
00:16:15.179 Compiler for C supports arguments -Wformat-nonliteral: NO
00:16:15.179 Compiler for C supports arguments -Wformat-security: NO
00:16:15.179 Compiler for C supports arguments -Wmissing-declarations: YES
00:16:15.179 Compiler for C supports arguments -Wmissing-prototypes: YES
00:16:15.179 Compiler for C supports arguments -Wnested-externs: YES
00:16:15.179 Compiler for C supports arguments -Wold-style-definition: YES
00:16:15.179 Compiler for C supports arguments -Wpointer-arith: YES
00:16:15.179 Compiler for C supports arguments -Wsign-compare: YES
00:16:15.179 Compiler for C supports arguments -Wstrict-prototypes: YES
00:16:15.179 Compiler for C supports arguments -Wundef: YES
00:16:15.179 Compiler for C supports arguments -Wwrite-strings: YES
00:16:15.179 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:16:15.179 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:16:15.179 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:16:15.179 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:16:15.179 Program objdump found: YES (/usr/bin/objdump)
00:16:15.179 Compiler for C supports arguments -mavx512f: YES
00:16:15.179 Checking if "AVX512 checking" compiles: YES
00:16:15.179 Fetching value of define "__SSE4_2__" : 1
00:16:15.179 Fetching value of define "__AES__" : 1
00:16:15.179 Fetching value of define "__AVX__" : 1
00:16:15.179 Fetching value of define "__AVX2__" : 1
00:16:15.179 Fetching value of define "__AVX512BW__" : 1
00:16:15.179 Fetching value of define "__AVX512CD__" : 1
00:16:15.179 Fetching value of define "__AVX512DQ__" : 1
00:16:15.179 Fetching value of define "__AVX512F__" : 1
00:16:15.179 Fetching value of define "__AVX512VL__" : 1
00:16:15.179 Fetching value of define "__PCLMUL__" : 1
00:16:15.179 Fetching value of define "__RDRND__" : 1
00:16:15.179 Fetching value of define "__RDSEED__" : 1
00:16:15.179 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:16:15.179 Fetching value of define "__znver1__" : (undefined)
00:16:15.179 Fetching value of define "__znver2__" : (undefined)
00:16:15.179 Fetching value of define "__znver3__" : (undefined)
00:16:15.179 Fetching value of define "__znver4__" : (undefined)
00:16:15.179 Library asan found: YES
00:16:15.179 Compiler for C supports arguments -Wno-format-truncation: YES
00:16:15.179 Message: lib/log: Defining dependency "log"
00:16:15.179 Message: lib/kvargs: Defining dependency "kvargs"
00:16:15.179 Message: lib/telemetry: Defining dependency "telemetry"
00:16:15.179 Library rt found: YES
00:16:15.179 Checking for function "getentropy" : NO
00:16:15.179 Message: lib/eal: Defining dependency "eal"
00:16:15.179 Message: lib/ring: Defining dependency "ring"
00:16:15.179 Message: lib/rcu: Defining dependency "rcu"
00:16:15.179 Message: lib/mempool: Defining dependency "mempool"
00:16:15.179 Message: lib/mbuf: Defining dependency "mbuf"
00:16:15.179 Fetching value of define "__PCLMUL__" : 1 (cached)
00:16:15.179 Fetching value of define "__AVX512F__" : 1 (cached)
00:16:15.179 Fetching value of define "__AVX512BW__" : 1 (cached)
00:16:15.179 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:16:15.179 Fetching value of define "__AVX512VL__" : 1 (cached)
00:16:15.179 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:16:15.179 Compiler for C supports arguments -mpclmul: YES
00:16:15.179 Compiler for C supports arguments -maes: YES
00:16:15.179 Compiler for C supports arguments -mavx512f: YES (cached)
00:16:15.179 Compiler for C supports arguments -mavx512bw: YES
00:16:15.179 Compiler for C supports arguments -mavx512dq: YES
00:16:15.179 Compiler for C supports arguments -mavx512vl: YES
00:16:15.179 Compiler for C supports arguments -mvpclmulqdq: YES
00:16:15.179 Compiler for C supports arguments -mavx2: YES
00:16:15.179 Compiler for C supports arguments -mavx: YES
00:16:15.180 Message: lib/net: Defining dependency "net"
00:16:15.180 Message: lib/meter: Defining dependency "meter"
00:16:15.180 Message: lib/ethdev: Defining dependency "ethdev"
00:16:15.180 Message: lib/pci: Defining dependency "pci"
00:16:15.180 Message: lib/cmdline: Defining dependency "cmdline"
00:16:15.180 Message: lib/hash: Defining dependency "hash"
00:16:15.180 Message: lib/timer: Defining dependency "timer"
00:16:15.180 Message: lib/compressdev: Defining dependency "compressdev"
00:16:15.180 Message: lib/cryptodev: Defining dependency "cryptodev"
00:16:15.180 Message: lib/dmadev: Defining dependency "dmadev"
00:16:15.180 Compiler for C supports arguments -Wno-cast-qual: YES
00:16:15.180 Message: lib/power: Defining dependency "power"
00:16:15.180 Message: lib/reorder: Defining dependency "reorder"
00:16:15.180 Message: lib/security: Defining dependency "security"
00:16:15.180 Has header "linux/userfaultfd.h" : YES
00:16:15.180 Has header "linux/vduse.h" : YES
00:16:15.180 Message: lib/vhost: Defining dependency "vhost"
00:16:15.180 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:16:15.180 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:16:15.180 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:16:15.180 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:16:15.180 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:16:15.180 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:16:15.180 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:16:15.180 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:16:15.180 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:16:15.180 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:16:15.180 Program doxygen found: YES (/usr/local/bin/doxygen)
00:16:15.180 Configuring doxy-api-html.conf using configuration
00:16:15.180 Configuring doxy-api-man.conf using configuration
00:16:15.180 Program mandb found: YES (/usr/bin/mandb)
00:16:15.180 Program sphinx-build found: NO
00:16:15.180 Configuring rte_build_config.h using configuration
00:16:15.180 Message:
00:16:15.180 =================
00:16:15.180 Applications Enabled
00:16:15.180 =================
00:16:15.180
00:16:15.180 apps:
00:16:15.180
00:16:15.180
00:16:15.180 Message:
00:16:15.180 =================
00:16:15.180 Libraries Enabled
00:16:15.180 =================
00:16:15.180
00:16:15.180 libs:
00:16:15.180 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:16:15.180 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:16:15.180 cryptodev, dmadev, power, reorder, security, vhost,
00:16:15.180
00:16:15.180 Message:
00:16:15.180 ===============
00:16:15.180 Drivers Enabled
00:16:15.180 ===============
00:16:15.180
00:16:15.180 common:
00:16:15.180
00:16:15.180 bus:
00:16:15.180 pci, vdev,
00:16:15.180 mempool:
00:16:15.180 ring,
00:16:15.180 dma:
00:16:15.180
00:16:15.180 net:
00:16:15.180
00:16:15.180 crypto:
00:16:15.180
00:16:15.180 compress:
00:16:15.180
00:16:15.180 vdpa:
00:16:15.180
00:16:15.180
00:16:15.180 Message:
00:16:15.180 =================
00:16:15.180 Content Skipped
00:16:15.180 =================
00:16:15.180
00:16:15.180 apps:
00:16:15.180 dumpcap: explicitly disabled via build config
00:16:15.180 graph: explicitly disabled via build config
00:16:15.180 pdump: explicitly disabled via build config
00:16:15.180 proc-info: explicitly disabled via build config
00:16:15.180 test-acl: explicitly disabled via build config
00:16:15.180 test-bbdev: explicitly disabled via build config
00:16:15.180 test-cmdline: explicitly disabled via build config
00:16:15.180 test-compress-perf: explicitly disabled via build config
00:16:15.180 test-crypto-perf: explicitly disabled via build config
00:16:15.180 test-dma-perf: explicitly disabled via build config
00:16:15.180 test-eventdev: explicitly disabled via build config
00:16:15.180 test-fib: explicitly disabled via build config
00:16:15.180 test-flow-perf: explicitly disabled via build config
00:16:15.180 test-gpudev: explicitly disabled via build config
00:16:15.180 test-mldev: explicitly disabled via build config
00:16:15.180 test-pipeline: explicitly disabled via build config
00:16:15.180 test-pmd: explicitly disabled via build config
00:16:15.180 test-regex: explicitly disabled via build config
00:16:15.180 test-sad: explicitly disabled via build config
00:16:15.180 test-security-perf: explicitly disabled via build config
00:16:15.180
00:16:15.180 libs:
00:16:15.180 argparse: explicitly disabled via build config
00:16:15.180 metrics: explicitly disabled via build config
00:16:15.180 acl: explicitly disabled via build config
00:16:15.180 bbdev: explicitly disabled via build config
00:16:15.180 bitratestats: explicitly disabled via build config
00:16:15.180 bpf: explicitly disabled via build config
00:16:15.180 cfgfile: explicitly disabled via build config
00:16:15.180 distributor: explicitly disabled via build config
00:16:15.180 efd: explicitly disabled via build config
00:16:15.180 eventdev: explicitly disabled via build config
00:16:15.180 dispatcher: explicitly disabled via build config
00:16:15.180 gpudev: explicitly disabled via build config
00:16:15.180 gro: explicitly disabled via build config
00:16:15.180 gso: explicitly disabled via build config
00:16:15.180 ip_frag: explicitly disabled via build config
00:16:15.180 jobstats: explicitly disabled via build config
00:16:15.180 latencystats: explicitly disabled via build config
00:16:15.180 lpm: explicitly disabled via build config
00:16:15.180 member: explicitly disabled via build config
00:16:15.180 pcapng: explicitly disabled via build config
00:16:15.180 rawdev: explicitly disabled via build config
00:16:15.180 regexdev: explicitly disabled via build config
00:16:15.180 mldev: explicitly disabled via build config
00:16:15.180 rib: explicitly disabled via build config
00:16:15.180 sched: explicitly disabled via build config
00:16:15.180 stack: explicitly disabled via build config
00:16:15.180 ipsec: explicitly disabled via build config
00:16:15.180 pdcp: explicitly disabled via build config
00:16:15.180 fib: explicitly disabled via build config
00:16:15.180 port: explicitly disabled via build config
00:16:15.180 pdump: explicitly disabled via build config
00:16:15.180 table: explicitly disabled via build config
00:16:15.180 pipeline: explicitly disabled via build config
00:16:15.180 graph: explicitly disabled via build config
00:16:15.180 node: explicitly disabled via build config
00:16:15.180
00:16:15.180 drivers:
00:16:15.180 common/cpt: not in enabled drivers build config
00:16:15.180 common/dpaax: not in enabled drivers build config
00:16:15.180 common/iavf: not in enabled drivers build config
00:16:15.180 common/idpf: not in enabled drivers build config
00:16:15.180 common/ionic: not in enabled drivers build config
00:16:15.180 common/mvep: not in enabled drivers build config
00:16:15.180 common/octeontx: not in enabled drivers build config
00:16:15.180 bus/auxiliary: not in enabled drivers build config
00:16:15.180 bus/cdx: not in enabled drivers build config
00:16:15.180 bus/dpaa: not in enabled drivers build config
00:16:15.180 bus/fslmc: not in enabled drivers build config
00:16:15.180 bus/ifpga: not in enabled drivers build config
00:16:15.180 bus/platform: not in enabled drivers build config
00:16:15.180 bus/uacce: not in enabled drivers build config
00:16:15.180 bus/vmbus: not in enabled drivers build config
00:16:15.180 common/cnxk: not in enabled drivers build config
00:16:15.180 common/mlx5: not in enabled drivers build config
00:16:15.180 common/nfp: not in enabled drivers build config
00:16:15.180 common/nitrox: not in enabled drivers build config
00:16:15.180 common/qat: not in enabled drivers build config
00:16:15.180 common/sfc_efx: not in enabled drivers build config
00:16:15.180 mempool/bucket: not in enabled drivers build config
00:16:15.180 mempool/cnxk: not in enabled drivers build config
00:16:15.180 mempool/dpaa: not in enabled drivers build config
00:16:15.180 mempool/dpaa2: not in enabled drivers build config
00:16:15.180 mempool/octeontx: not in enabled drivers build config
00:16:15.180 mempool/stack: not in enabled drivers build config
00:16:15.180 dma/cnxk: not in enabled drivers build config
00:16:15.180 dma/dpaa: not in enabled drivers build config
00:16:15.180 dma/dpaa2: not in enabled drivers build config
00:16:15.180 dma/hisilicon: not in enabled drivers build config
00:16:15.180 dma/idxd: not in enabled drivers build config
00:16:15.180 dma/ioat: not in enabled drivers build config
00:16:15.180 dma/skeleton: not in enabled drivers build config
00:16:15.180 net/af_packet: not in enabled drivers build config
00:16:15.180 net/af_xdp: not in enabled drivers build config
00:16:15.180 net/ark: not in enabled drivers build config
00:16:15.180 net/atlantic: not in enabled drivers build config
00:16:15.180 net/avp: not in enabled drivers build config
00:16:15.180 net/axgbe: not in enabled drivers build config
00:16:15.180 net/bnx2x: not in enabled drivers build config
00:16:15.180 net/bnxt: not in enabled drivers build config
00:16:15.180 net/bonding: not in enabled drivers build config
00:16:15.180 net/cnxk: not in enabled drivers build config
00:16:15.180 net/cpfl: not in enabled drivers build config
00:16:15.180 net/cxgbe: not in enabled drivers build config
00:16:15.180 net/dpaa: not in enabled drivers build config
00:16:15.180 net/dpaa2: not in enabled drivers build config
00:16:15.180 net/e1000: not in enabled drivers build config
00:16:15.180 net/ena: not in enabled drivers build config
00:16:15.180 net/enetc: not in enabled drivers build config
00:16:15.180 net/enetfec: not in enabled drivers build config
00:16:15.180 net/enic: not in enabled drivers build config
00:16:15.180 net/failsafe: not in enabled drivers build config
00:16:15.180 net/fm10k: not in enabled drivers build config
00:16:15.180 net/gve: not in enabled drivers build config
00:16:15.180 net/hinic: not in enabled drivers build config
00:16:15.180 net/hns3: not in enabled drivers build config
00:16:15.180 net/i40e: not in enabled drivers build config
00:16:15.180 net/iavf: not in enabled drivers build config
00:16:15.180 net/ice: not in enabled drivers build config
00:16:15.180 net/idpf: not in enabled drivers build config
00:16:15.180 net/igc: not in enabled drivers build config
00:16:15.180 net/ionic: not in enabled drivers build config
00:16:15.180 net/ipn3ke: not in enabled drivers build config
00:16:15.180 net/ixgbe: not in enabled drivers build config
00:16:15.180 net/mana: not in enabled drivers build config
00:16:15.180 net/memif: not in enabled drivers build config
00:16:15.180 net/mlx4: not in enabled drivers build config
00:16:15.180 net/mlx5: not in enabled drivers build config
00:16:15.180 net/mvneta: not in enabled drivers build config
00:16:15.180 net/mvpp2: not in enabled drivers build config
00:16:15.180 net/netvsc: not in enabled drivers build config
00:16:15.180 net/nfb: not in enabled drivers build config
00:16:15.180 net/nfp: not in enabled drivers build config
00:16:15.180 net/ngbe: not in enabled drivers build config
00:16:15.180 net/null: not in enabled drivers build config
00:16:15.180 net/octeontx: not in enabled drivers build config
00:16:15.180 net/octeon_ep: not in enabled drivers build config
00:16:15.180 net/pcap: not in enabled drivers build config
00:16:15.180 net/pfe: not in enabled drivers build config
00:16:15.180 net/qede: not in enabled drivers build config
00:16:15.180 net/ring: not in enabled drivers build config
00:16:15.180 net/sfc: not in enabled drivers build config
00:16:15.180 net/softnic: not in enabled drivers build config
00:16:15.180 net/tap: not in enabled drivers build config
00:16:15.180 net/thunderx: not in enabled drivers build config
00:16:15.180 net/txgbe: not in enabled drivers build config
00:16:15.180 net/vdev_netvsc: not in enabled drivers build config
00:16:15.180 net/vhost: not in enabled drivers build config
00:16:15.180 net/virtio: not in enabled drivers build config
00:16:15.180 net/vmxnet3: not in enabled drivers build config
00:16:15.180 raw/*: missing internal dependency, "rawdev"
00:16:15.180 crypto/armv8: not in enabled drivers build config
00:16:15.180 crypto/bcmfs: not in enabled drivers build config
00:16:15.180 crypto/caam_jr: not in enabled drivers build config
00:16:15.180 crypto/ccp: not in enabled drivers build config
00:16:15.180 crypto/cnxk: not in enabled drivers build config
00:16:15.180 crypto/dpaa_sec: not in enabled drivers build config
00:16:15.180 crypto/dpaa2_sec: not in enabled drivers build config
00:16:15.180 crypto/ipsec_mb: not in enabled drivers build config
00:16:15.180 crypto/mlx5: not in enabled drivers build config
00:16:15.180 crypto/mvsam: not in enabled drivers build config
00:16:15.180 crypto/nitrox: not in enabled drivers build config 00:16:15.180 crypto/null: not in enabled drivers build config 00:16:15.180 crypto/octeontx: not in enabled drivers build config 00:16:15.180 crypto/openssl: not in enabled drivers build config 00:16:15.180 crypto/scheduler: not in enabled drivers build config 00:16:15.180 crypto/uadk: not in enabled drivers build config 00:16:15.180 crypto/virtio: not in enabled drivers build config 00:16:15.180 compress/isal: not in enabled drivers build config 00:16:15.180 compress/mlx5: not in enabled drivers build config 00:16:15.180 compress/nitrox: not in enabled drivers build config 00:16:15.180 compress/octeontx: not in enabled drivers build config 00:16:15.180 compress/zlib: not in enabled drivers build config 00:16:15.180 regex/*: missing internal dependency, "regexdev" 00:16:15.180 ml/*: missing internal dependency, "mldev" 00:16:15.180 vdpa/ifc: not in enabled drivers build config 00:16:15.180 vdpa/mlx5: not in enabled drivers build config 00:16:15.180 vdpa/nfp: not in enabled drivers build config 00:16:15.180 vdpa/sfc: not in enabled drivers build config 00:16:15.180 event/*: missing internal dependency, "eventdev" 00:16:15.180 baseband/*: missing internal dependency, "bbdev" 00:16:15.180 gpu/*: missing internal dependency, "gpudev" 00:16:15.180 00:16:15.180 00:16:15.180 Build targets in project: 85 00:16:15.180 00:16:15.180 DPDK 24.03.0 00:16:15.180 00:16:15.180 User defined options 00:16:15.180 buildtype : debug 00:16:15.180 default_library : shared 00:16:15.180 libdir : lib 00:16:15.180 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:15.180 b_sanitize : address 00:16:15.180 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:16:15.180 c_link_args : 00:16:15.180 cpu_instruction_set: native 00:16:15.180 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:16:15.180 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:16:15.180 enable_docs : false 00:16:15.180 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:16:15.180 enable_kmods : false 00:16:15.180 max_lcores : 128 00:16:15.180 tests : false 00:16:15.180 00:16:15.180 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:16:15.180 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:16:15.180 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:16:15.180 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:16:15.180 [3/268] Linking static target lib/librte_kvargs.a 00:16:15.180 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:16:15.180 [5/268] Linking static target lib/librte_log.a 00:16:15.439 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:16:15.697 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:16:15.697 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:16:15.697 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:16:15.955 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:16:15.955 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:16:15.955 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:16:15.955 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:16:15.955 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:16:15.955 [15/268] Linking static target lib/librte_telemetry.a 00:16:15.955 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:16:15.955 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:16:16.214 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:16:16.472 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:16:16.472 [20/268] Linking target lib/librte_log.so.24.1 00:16:16.731 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:16:16.731 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:16:16.731 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:16:16.731 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:16:16.731 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:16:16.731 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:16:16.989 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:16:16.989 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:16:16.989 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:16:16.989 [30/268] Linking target lib/librte_kvargs.so.24.1 00:16:16.989 [31/268] Linking target lib/librte_telemetry.so.24.1 00:16:16.989 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:16:17.247 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:16:17.247 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:16:17.247 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:16:17.247 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:16:17.247 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:16:17.247 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:16:17.505 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:16:17.505 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:16:17.505 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:16:17.505 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:16:17.505 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:16:17.763 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:16:17.763 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:16:18.021 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:16:18.021 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:16:18.021 [48/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:16:18.280 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:16:18.280 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:16:18.538 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:16:18.538 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:16:18.538 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:16:18.538 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:16:18.538 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:16:18.797 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:16:18.797 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:16:18.797 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:16:19.055 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:16:19.313 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:16:19.313 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:16:19.313 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:16:19.313 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:16:19.313 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:16:19.313 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:16:19.572 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:16:19.572 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:16:19.572 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:16:19.830 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:16:19.830 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:16:20.089 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:16:20.089 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:16:20.089 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:16:20.089 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:16:20.348 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:16:20.348 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:16:20.348 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:16:20.628 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:16:20.628 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:16:20.628 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:16:20.628 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:16:20.886 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:16:20.886 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:16:20.886 [84/268] Linking static target lib/librte_ring.a 00:16:20.886 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:16:20.886 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:16:21.143 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:16:21.143 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:16:21.143 
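The eal_common and malloc objects above belong to DPDK's Environment Abstraction Layer, which every DPDK consumer (SPDK included, via its env_dpdk shim) must initialize before calling any other rte_ API; the static archive itself is linked at step [89] just below. A minimal sketch of that entry point, assuming a default hugepage setup — the error handling and messages are illustrative, not taken from this build:

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes the EAL arguments (cores, memory,
         * device allow-lists) and returns how many it parsed, or -1. */
        int ret = rte_eal_init(argc, argv);
        if (ret < 0)
            rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

        printf("EAL running, main lcore = %u\n", rte_lcore_id());

        rte_eal_cleanup(); /* release hugepages and other EAL state */
        return 0;
    }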
[89/268] Linking static target lib/librte_eal.a 00:16:21.402 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:16:21.402 [91/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:16:21.402 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:16:21.660 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:16:21.660 [94/268] Linking static target lib/librte_mempool.a 00:16:21.660 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:16:21.660 [96/268] Linking static target lib/librte_rcu.a 00:16:21.918 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:16:21.918 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:16:22.176 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:16:22.176 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:16:22.176 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:16:22.176 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:16:22.435 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:16:22.435 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:16:22.435 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:16:22.435 [106/268] Linking static target lib/librte_net.a 00:16:22.435 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:16:22.693 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:16:22.693 [109/268] Linking static target lib/librte_mbuf.a 00:16:22.693 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:16:22.693 [111/268] Linking static target lib/librte_meter.a 00:16:22.951 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:16:23.210 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:16:23.210 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:16:23.211 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:16:23.211 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:16:23.469 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:16:23.469 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:16:23.730 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:16:23.989 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:16:23.989 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:16:24.557 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:16:24.557 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:16:24.557 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:16:24.815 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:16:24.815 [126/268] Linking static target lib/librte_pci.a 00:16:24.815 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:16:25.074 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:16:25.074 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:16:25.074 [130/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:16:25.074 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:16:25.333 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:16:25.333 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:16:25.333 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:16:25.333 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:25.333 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:16:25.591 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:16:25.591 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:16:25.591 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:16:25.591 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:16:25.591 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:16:25.591 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:16:25.849 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:16:25.849 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:16:25.849 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:16:25.849 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:16:25.849 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:16:25.849 [148/268] Linking static target lib/librte_cmdline.a 00:16:26.417 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:16:26.417 [150/268] Linking static target lib/librte_timer.a 00:16:26.417 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:16:26.417 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:16:26.675 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:16:26.934 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:16:26.934 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:16:27.193 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:16:27.193 [157/268] Linking static target lib/librte_hash.a 00:16:27.193 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:16:27.451 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:16:27.451 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:16:27.710 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:16:27.710 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:16:27.710 [163/268] Linking static target lib/librte_compressdev.a 00:16:27.977 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:16:27.977 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:16:27.977 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:16:27.977 [167/268] Linking static target lib/librte_dmadev.a 00:16:27.977 [168/268] Linking static target lib/librte_ethdev.a 00:16:27.977 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:16:28.236 [170/268] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:16:28.236 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:16:28.552 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:16:28.828 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:16:28.828 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:16:29.085 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:16:29.085 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:16:29.344 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:29.344 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:29.344 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:16:29.344 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:16:29.344 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:16:29.601 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:16:30.166 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:16:30.166 [184/268] Linking static target lib/librte_reorder.a 00:16:30.166 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:16:30.166 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:16:30.166 [187/268] Linking static target lib/librte_cryptodev.a 00:16:30.423 [188/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:16:30.423 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:16:30.423 [190/268] Linking static target lib/librte_power.a 00:16:30.424 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:16:30.989 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:16:30.989 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:16:31.247 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:16:31.247 [195/268] Linking static target lib/librte_security.a 00:16:31.866 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:16:31.866 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:16:31.866 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:16:31.866 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:16:32.143 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.143 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:16:32.401 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:16:32.659 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:16:32.659 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:16:32.660 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:16:32.660 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:16:32.918 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:16:32.918 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:16:33.175 [209/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:16:33.175 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:16:33.175 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:16:33.175 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:33.175 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:16:33.434 [214/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:33.434 [215/268] Linking static target drivers/librte_bus_pci.a 00:16:33.434 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:16:33.434 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:16:33.434 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:16:33.434 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:33.434 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:16:33.434 [221/268] Linking static target drivers/librte_bus_vdev.a 00:16:33.693 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:16:33.693 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:33.693 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:16:33.693 [225/268] Linking static target drivers/librte_mempool_ring.a 00:16:33.951 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.209 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:16:34.467 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:16:36.456 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:16:36.456 [230/268] Linking target lib/librte_eal.so.24.1 00:16:36.456 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:16:36.714 [232/268] Linking target lib/librte_pci.so.24.1 00:16:36.714 [233/268] Linking target lib/librte_timer.so.24.1 00:16:36.714 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:16:36.714 [235/268] Linking target lib/librte_ring.so.24.1 00:16:36.714 [236/268] Linking target lib/librte_dmadev.so.24.1 00:16:36.714 [237/268] Linking target lib/librte_meter.so.24.1 00:16:36.714 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:16:36.714 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:16:36.714 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:16:36.714 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:16:36.972 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:16:36.972 [243/268] Linking target lib/librte_rcu.so.24.1 00:16:36.972 [244/268] Linking target lib/librte_mempool.so.24.1 00:16:36.972 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:16:36.972 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:16:36.972 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:16:37.229 [248/268] Linking target lib/librte_mbuf.so.24.1 00:16:37.229 
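With lib/librte_mempool.so and lib/librte_mbuf.so now linked (steps [244] and [248] above), the classic next step in a DPDK program is carving out a packet-buffer pool; SPDK's env layer performs an equivalent allocation on the application's behalf. A hedged sketch — the pool name and sizing constants are arbitrary examples, not values used by this build:

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_lcore.h>
    #include <rte_errno.h>

    /* Create 8191 packet buffers with a 250-entry per-core cache and
     * default-sized data rooms, on the caller's NUMA socket. */
    struct rte_mempool *make_example_pool(void)
    {
        struct rte_mempool *mp = rte_pktmbuf_pool_create(
            "example_pool", 8191, 250, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (mp == NULL)
            fprintf(stderr, "pool creation failed: %s\n",
                    rte_strerror(rte_errno));
        return mp;
    }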
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:16:37.489 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:16:37.489 [251/268] Linking target lib/librte_reorder.so.24.1 00:16:37.489 [252/268] Linking target lib/librte_compressdev.so.24.1 00:16:37.489 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:16:37.489 [254/268] Linking target lib/librte_net.so.24.1 00:16:37.489 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:16:37.747 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:16:37.747 [257/268] Linking target lib/librte_security.so.24.1 00:16:37.747 [258/268] Linking target lib/librte_hash.so.24.1 00:16:37.747 [259/268] Linking target lib/librte_cmdline.so.24.1 00:16:37.747 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:16:37.747 [261/268] Linking target lib/librte_ethdev.so.24.1 00:16:38.005 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:16:38.005 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:16:38.005 [264/268] Linking target lib/librte_power.so.24.1 00:16:39.910 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:16:39.910 [266/268] Linking static target lib/librte_vhost.a 00:16:41.812 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:41.812 [268/268] Linking target lib/librte_vhost.so.24.1 00:16:41.812 INFO: autodetecting backend as ninja 00:16:41.812 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:17:08.466 CC lib/log/log.o 00:17:08.466 CC lib/log/log_flags.o 00:17:08.466 CC lib/log/log_deprecated.o 00:17:08.466 CC lib/ut_mock/mock.o 00:17:08.466 CC lib/ut/ut.o 00:17:08.466 LIB libspdk_ut_mock.a 00:17:08.466 SO libspdk_ut_mock.so.6.0 00:17:08.466 LIB libspdk_log.a 00:17:08.466 SYMLINK libspdk_ut_mock.so 00:17:08.466 SO libspdk_log.so.7.1 00:17:08.466 LIB libspdk_ut.a 00:17:08.466 SYMLINK libspdk_log.so 00:17:08.466 SO libspdk_ut.so.2.0 00:17:08.466 SYMLINK libspdk_ut.so 00:17:08.466 CC lib/util/base64.o 00:17:08.466 CC lib/util/cpuset.o 00:17:08.466 CC lib/util/bit_array.o 00:17:08.466 CC lib/util/crc16.o 00:17:08.466 CC lib/dma/dma.o 00:17:08.466 CC lib/util/crc32.o 00:17:08.466 CC lib/util/crc32c.o 00:17:08.466 CXX lib/trace_parser/trace.o 00:17:08.466 CC lib/ioat/ioat.o 00:17:08.466 CC lib/vfio_user/host/vfio_user_pci.o 00:17:08.466 CC lib/util/crc32_ieee.o 00:17:08.466 CC lib/util/crc64.o 00:17:08.466 CC lib/vfio_user/host/vfio_user.o 00:17:08.466 LIB libspdk_dma.a 00:17:08.466 CC lib/util/dif.o 00:17:08.466 CC lib/util/fd.o 00:17:08.466 SO libspdk_dma.so.5.0 00:17:08.466 CC lib/util/fd_group.o 00:17:08.466 SYMLINK libspdk_dma.so 00:17:08.466 CC lib/util/file.o 00:17:08.466 CC lib/util/hexlify.o 00:17:08.466 LIB libspdk_ioat.a 00:17:08.466 CC lib/util/iov.o 00:17:08.466 SO libspdk_ioat.so.7.0 00:17:08.466 CC lib/util/math.o 00:17:08.466 CC lib/util/net.o 00:17:08.466 LIB libspdk_vfio_user.a 00:17:08.466 CC lib/util/pipe.o 00:17:08.466 SO libspdk_vfio_user.so.5.0 00:17:08.466 SYMLINK libspdk_ioat.so 00:17:08.466 CC lib/util/strerror_tls.o 00:17:08.466 CC lib/util/string.o 00:17:08.466 SYMLINK libspdk_vfio_user.so 00:17:08.466 CC lib/util/uuid.o 00:17:08.466 CC lib/util/xor.o 00:17:08.466 CC lib/util/zipf.o 00:17:08.466 CC lib/util/md5.o 
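libspdk_log, linked above (with libspdk_util following just below), is the logging layer every subsequent SPDK library in this build prints through. A small sketch of how a consumer typically drives it, using the standard macros from spdk/log.h; the chosen levels and messages are illustrative:

    #include "spdk/log.h"

    void configure_logging(void)
    {
        /* Messages at or above NOTICE are printed to stderr... */
        spdk_log_set_print_level(SPDK_LOG_NOTICE);
        /* ...while the overall level gates what is emitted at all. */
        spdk_log_set_level(SPDK_LOG_INFO);

        SPDK_NOTICELOG("logging configured\n");
        SPDK_ERRLOG("this is what an error line looks like\n");
    }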
00:17:08.466 LIB libspdk_util.a 00:17:08.466 SO libspdk_util.so.10.1 00:17:08.466 LIB libspdk_trace_parser.a 00:17:08.466 SO libspdk_trace_parser.so.6.0 00:17:08.466 SYMLINK libspdk_util.so 00:17:08.466 SYMLINK libspdk_trace_parser.so 00:17:08.466 CC lib/idxd/idxd.o 00:17:08.466 CC lib/idxd/idxd_user.o 00:17:08.466 CC lib/idxd/idxd_kernel.o 00:17:08.466 CC lib/conf/conf.o 00:17:08.466 CC lib/rdma_utils/rdma_utils.o 00:17:08.466 CC lib/json/json_parse.o 00:17:08.466 CC lib/json/json_util.o 00:17:08.466 CC lib/env_dpdk/env.o 00:17:08.466 CC lib/env_dpdk/memory.o 00:17:08.466 CC lib/vmd/vmd.o 00:17:08.466 CC lib/vmd/led.o 00:17:08.466 LIB libspdk_conf.a 00:17:08.466 SO libspdk_conf.so.6.0 00:17:08.466 SYMLINK libspdk_conf.so 00:17:08.466 CC lib/json/json_write.o 00:17:08.466 CC lib/env_dpdk/pci.o 00:17:08.466 CC lib/env_dpdk/init.o 00:17:08.466 CC lib/env_dpdk/threads.o 00:17:08.466 CC lib/env_dpdk/pci_ioat.o 00:17:08.466 LIB libspdk_rdma_utils.a 00:17:08.466 SO libspdk_rdma_utils.so.1.0 00:17:08.466 CC lib/env_dpdk/pci_virtio.o 00:17:08.466 CC lib/env_dpdk/pci_vmd.o 00:17:08.466 SYMLINK libspdk_rdma_utils.so 00:17:08.466 CC lib/env_dpdk/pci_idxd.o 00:17:08.466 LIB libspdk_json.a 00:17:08.466 SO libspdk_json.so.6.0 00:17:08.466 CC lib/env_dpdk/pci_event.o 00:17:08.466 LIB libspdk_idxd.a 00:17:08.466 SYMLINK libspdk_json.so 00:17:08.466 CC lib/env_dpdk/sigbus_handler.o 00:17:08.466 CC lib/env_dpdk/pci_dpdk.o 00:17:08.466 SO libspdk_idxd.so.12.1 00:17:08.466 LIB libspdk_vmd.a 00:17:08.466 SO libspdk_vmd.so.6.0 00:17:08.466 CC lib/env_dpdk/pci_dpdk_2207.o 00:17:08.466 SYMLINK libspdk_idxd.so 00:17:08.466 CC lib/env_dpdk/pci_dpdk_2211.o 00:17:08.466 CC lib/rdma_provider/common.o 00:17:08.466 CC lib/rdma_provider/rdma_provider_verbs.o 00:17:08.466 SYMLINK libspdk_vmd.so 00:17:08.466 CC lib/jsonrpc/jsonrpc_server.o 00:17:08.466 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:17:08.466 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:17:08.466 CC lib/jsonrpc/jsonrpc_client.o 00:17:08.466 LIB libspdk_rdma_provider.a 00:17:08.466 SO libspdk_rdma_provider.so.7.0 00:17:08.466 SYMLINK libspdk_rdma_provider.so 00:17:08.724 LIB libspdk_jsonrpc.a 00:17:08.724 SO libspdk_jsonrpc.so.6.0 00:17:08.724 SYMLINK libspdk_jsonrpc.so 00:17:08.982 LIB libspdk_env_dpdk.a 00:17:09.240 CC lib/rpc/rpc.o 00:17:09.240 SO libspdk_env_dpdk.so.15.1 00:17:09.497 SYMLINK libspdk_env_dpdk.so 00:17:09.497 LIB libspdk_rpc.a 00:17:09.497 SO libspdk_rpc.so.6.0 00:17:09.497 SYMLINK libspdk_rpc.so 00:17:09.813 CC lib/trace/trace_flags.o 00:17:09.813 CC lib/trace/trace.o 00:17:09.813 CC lib/trace/trace_rpc.o 00:17:09.813 CC lib/notify/notify.o 00:17:09.813 CC lib/keyring/keyring_rpc.o 00:17:09.813 CC lib/notify/notify_rpc.o 00:17:09.813 CC lib/keyring/keyring.o 00:17:10.071 LIB libspdk_notify.a 00:17:10.071 SO libspdk_notify.so.6.0 00:17:10.071 LIB libspdk_keyring.a 00:17:10.329 SO libspdk_keyring.so.2.0 00:17:10.329 LIB libspdk_trace.a 00:17:10.329 SYMLINK libspdk_notify.so 00:17:10.329 SO libspdk_trace.so.11.0 00:17:10.329 SYMLINK libspdk_keyring.so 00:17:10.329 SYMLINK libspdk_trace.so 00:17:10.587 CC lib/sock/sock.o 00:17:10.587 CC lib/sock/sock_rpc.o 00:17:10.587 CC lib/thread/thread.o 00:17:10.587 CC lib/thread/iobuf.o 00:17:11.152 LIB libspdk_sock.a 00:17:11.152 SO libspdk_sock.so.10.0 00:17:11.410 SYMLINK libspdk_sock.so 00:17:11.667 CC lib/nvme/nvme_ctrlr.o 00:17:11.667 CC lib/nvme/nvme_ctrlr_cmd.o 00:17:11.667 CC lib/nvme/nvme_fabric.o 00:17:11.667 CC lib/nvme/nvme_ns.o 00:17:11.667 CC lib/nvme/nvme_ns_cmd.o 00:17:11.667 CC lib/nvme/nvme_pcie.o 
00:17:11.667 CC lib/nvme/nvme_pcie_common.o 00:17:11.667 CC lib/nvme/nvme_qpair.o 00:17:11.667 CC lib/nvme/nvme.o 00:17:12.602 LIB libspdk_thread.a 00:17:12.602 CC lib/nvme/nvme_quirks.o 00:17:12.602 SO libspdk_thread.so.11.0 00:17:12.602 CC lib/nvme/nvme_transport.o 00:17:12.860 SYMLINK libspdk_thread.so 00:17:12.860 CC lib/nvme/nvme_discovery.o 00:17:12.860 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:17:12.860 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:17:12.860 CC lib/nvme/nvme_tcp.o 00:17:13.178 CC lib/accel/accel.o 00:17:13.178 CC lib/accel/accel_rpc.o 00:17:13.178 CC lib/nvme/nvme_opal.o 00:17:13.437 CC lib/accel/accel_sw.o 00:17:13.437 CC lib/nvme/nvme_io_msg.o 00:17:13.437 CC lib/nvme/nvme_poll_group.o 00:17:13.437 CC lib/nvme/nvme_zns.o 00:17:13.696 CC lib/nvme/nvme_stubs.o 00:17:13.696 CC lib/nvme/nvme_auth.o 00:17:13.955 CC lib/blob/blobstore.o 00:17:13.955 CC lib/init/json_config.o 00:17:13.955 CC lib/init/subsystem.o 00:17:14.240 CC lib/init/subsystem_rpc.o 00:17:14.240 CC lib/init/rpc.o 00:17:14.240 CC lib/blob/request.o 00:17:14.240 CC lib/blob/zeroes.o 00:17:14.498 LIB libspdk_init.a 00:17:14.498 SO libspdk_init.so.6.0 00:17:14.498 CC lib/blob/blob_bs_dev.o 00:17:14.498 SYMLINK libspdk_init.so 00:17:14.498 CC lib/nvme/nvme_cuse.o 00:17:14.498 CC lib/virtio/virtio.o 00:17:14.756 CC lib/fsdev/fsdev.o 00:17:14.756 LIB libspdk_accel.a 00:17:14.756 CC lib/fsdev/fsdev_io.o 00:17:14.756 SO libspdk_accel.so.16.0 00:17:15.014 CC lib/fsdev/fsdev_rpc.o 00:17:15.014 SYMLINK libspdk_accel.so 00:17:15.014 CC lib/nvme/nvme_rdma.o 00:17:15.014 CC lib/virtio/virtio_vfio_user.o 00:17:15.014 CC lib/virtio/virtio_vhost_user.o 00:17:15.014 CC lib/virtio/virtio_pci.o 00:17:15.014 CC lib/event/app.o 00:17:15.272 CC lib/bdev/bdev.o 00:17:15.272 CC lib/bdev/bdev_rpc.o 00:17:15.272 CC lib/bdev/bdev_zone.o 00:17:15.530 CC lib/event/reactor.o 00:17:15.530 LIB libspdk_virtio.a 00:17:15.530 LIB libspdk_fsdev.a 00:17:15.530 SO libspdk_virtio.so.7.0 00:17:15.530 SO libspdk_fsdev.so.2.0 00:17:15.530 CC lib/bdev/part.o 00:17:15.787 SYMLINK libspdk_virtio.so 00:17:15.787 CC lib/bdev/scsi_nvme.o 00:17:15.787 CC lib/event/log_rpc.o 00:17:15.787 SYMLINK libspdk_fsdev.so 00:17:15.787 CC lib/event/app_rpc.o 00:17:15.787 CC lib/event/scheduler_static.o 00:17:16.045 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:17:16.045 LIB libspdk_event.a 00:17:16.045 SO libspdk_event.so.14.0 00:17:16.311 SYMLINK libspdk_event.so 00:17:16.891 LIB libspdk_nvme.a 00:17:16.891 LIB libspdk_fuse_dispatcher.a 00:17:16.891 SO libspdk_fuse_dispatcher.so.1.0 00:17:17.149 SYMLINK libspdk_fuse_dispatcher.so 00:17:17.149 SO libspdk_nvme.so.15.0 00:17:17.713 SYMLINK libspdk_nvme.so 00:17:18.645 LIB libspdk_blob.a 00:17:18.903 SO libspdk_blob.so.11.0 00:17:18.903 SYMLINK libspdk_blob.so 00:17:19.160 LIB libspdk_bdev.a 00:17:19.160 CC lib/blobfs/tree.o 00:17:19.160 CC lib/blobfs/blobfs.o 00:17:19.160 CC lib/lvol/lvol.o 00:17:19.418 SO libspdk_bdev.so.17.0 00:17:19.418 SYMLINK libspdk_bdev.so 00:17:19.983 CC lib/ublk/ublk.o 00:17:19.983 CC lib/ublk/ublk_rpc.o 00:17:19.983 CC lib/nvmf/ctrlr.o 00:17:19.983 CC lib/ftl/ftl_core.o 00:17:19.983 CC lib/nvmf/ctrlr_discovery.o 00:17:19.983 CC lib/ftl/ftl_init.o 00:17:19.983 CC lib/nbd/nbd.o 00:17:19.983 CC lib/scsi/dev.o 00:17:20.241 CC lib/scsi/lun.o 00:17:20.241 CC lib/ftl/ftl_layout.o 00:17:20.241 CC lib/nbd/nbd_rpc.o 00:17:20.499 CC lib/nvmf/ctrlr_bdev.o 00:17:20.499 CC lib/ftl/ftl_debug.o 00:17:20.499 LIB libspdk_blobfs.a 00:17:20.757 SO libspdk_blobfs.so.10.0 00:17:20.757 LIB libspdk_nbd.a 00:17:20.757 SO 
libspdk_nbd.so.7.0 00:17:20.757 CC lib/scsi/port.o 00:17:20.757 SYMLINK libspdk_blobfs.so 00:17:20.757 CC lib/scsi/scsi.o 00:17:20.757 LIB libspdk_lvol.a 00:17:21.014 CC lib/ftl/ftl_io.o 00:17:21.014 SO libspdk_lvol.so.10.0 00:17:21.014 SYMLINK libspdk_nbd.so 00:17:21.014 CC lib/ftl/ftl_sb.o 00:17:21.014 CC lib/ftl/ftl_l2p.o 00:17:21.014 CC lib/nvmf/subsystem.o 00:17:21.014 CC lib/nvmf/nvmf.o 00:17:21.014 SYMLINK libspdk_lvol.so 00:17:21.014 CC lib/nvmf/nvmf_rpc.o 00:17:21.014 CC lib/scsi/scsi_bdev.o 00:17:21.272 CC lib/ftl/ftl_l2p_flat.o 00:17:21.272 LIB libspdk_ublk.a 00:17:21.272 SO libspdk_ublk.so.3.0 00:17:21.272 CC lib/nvmf/transport.o 00:17:21.272 CC lib/ftl/ftl_nv_cache.o 00:17:21.530 SYMLINK libspdk_ublk.so 00:17:21.530 CC lib/scsi/scsi_pr.o 00:17:21.530 CC lib/scsi/scsi_rpc.o 00:17:21.789 CC lib/scsi/task.o 00:17:21.789 CC lib/ftl/ftl_band.o 00:17:22.048 CC lib/ftl/ftl_band_ops.o 00:17:22.048 CC lib/ftl/ftl_writer.o 00:17:22.306 LIB libspdk_scsi.a 00:17:22.306 CC lib/nvmf/tcp.o 00:17:22.306 SO libspdk_scsi.so.9.0 00:17:22.564 SYMLINK libspdk_scsi.so 00:17:22.564 CC lib/ftl/ftl_rq.o 00:17:22.564 CC lib/nvmf/stubs.o 00:17:22.821 CC lib/ftl/ftl_reloc.o 00:17:23.079 CC lib/ftl/ftl_l2p_cache.o 00:17:23.079 CC lib/iscsi/conn.o 00:17:23.079 CC lib/vhost/vhost.o 00:17:23.337 CC lib/vhost/vhost_rpc.o 00:17:23.337 CC lib/vhost/vhost_scsi.o 00:17:23.608 CC lib/vhost/vhost_blk.o 00:17:23.608 CC lib/vhost/rte_vhost_user.o 00:17:23.608 CC lib/ftl/ftl_p2l.o 00:17:24.193 CC lib/iscsi/init_grp.o 00:17:24.193 CC lib/iscsi/iscsi.o 00:17:24.451 CC lib/ftl/ftl_p2l_log.o 00:17:24.451 CC lib/nvmf/mdns_server.o 00:17:24.709 CC lib/iscsi/param.o 00:17:24.709 CC lib/ftl/mngt/ftl_mngt.o 00:17:24.709 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:17:24.968 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:17:24.968 CC lib/ftl/mngt/ftl_mngt_startup.o 00:17:24.968 CC lib/ftl/mngt/ftl_mngt_md.o 00:17:25.240 CC lib/nvmf/rdma.o 00:17:25.241 CC lib/ftl/mngt/ftl_mngt_misc.o 00:17:25.241 CC lib/iscsi/portal_grp.o 00:17:25.241 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:17:25.241 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:17:25.504 CC lib/nvmf/auth.o 00:17:25.762 CC lib/ftl/mngt/ftl_mngt_band.o 00:17:25.762 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:17:25.762 CC lib/iscsi/tgt_node.o 00:17:25.762 CC lib/iscsi/iscsi_subsystem.o 00:17:25.762 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:17:25.762 CC lib/iscsi/iscsi_rpc.o 00:17:26.019 LIB libspdk_vhost.a 00:17:26.019 SO libspdk_vhost.so.8.0 00:17:26.019 CC lib/iscsi/task.o 00:17:26.278 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:17:26.278 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:17:26.278 SYMLINK libspdk_vhost.so 00:17:26.278 CC lib/ftl/utils/ftl_conf.o 00:17:26.536 CC lib/ftl/utils/ftl_md.o 00:17:26.536 CC lib/ftl/utils/ftl_mempool.o 00:17:26.793 CC lib/ftl/utils/ftl_bitmap.o 00:17:26.793 CC lib/ftl/utils/ftl_property.o 00:17:26.793 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:17:26.793 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:17:26.793 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:17:27.052 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:17:27.052 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:17:27.310 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:17:27.310 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:17:27.310 CC lib/ftl/upgrade/ftl_sb_v3.o 00:17:27.310 CC lib/ftl/upgrade/ftl_sb_v5.o 00:17:27.310 CC lib/ftl/nvc/ftl_nvc_dev.o 00:17:27.569 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:17:27.569 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:17:27.569 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:17:27.569 CC lib/ftl/base/ftl_base_dev.o 00:17:27.569 LIB libspdk_iscsi.a 
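By this point the userspace NVMe driver objects from lib/nvme (compiled a little earlier in the log) sit underneath the ftl, nvmf, and iscsi libraries being archived here. Host code reaches that driver through the probe/attach callback pair declared in spdk/nvme.h; a minimal hedged sketch — the callback bodies are example code, and spdk_env_init() is assumed to have run beforehand:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller discovered */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("attached to %s\n", trid->traddr);
    }

    /* Enumerate local PCIe NVMe controllers via the default transport. */
    int scan_nvme(void)
    {
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }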
00:17:27.569 CC lib/ftl/base/ftl_base_bdev.o 00:17:27.827 SO libspdk_iscsi.so.8.0 00:17:27.827 CC lib/ftl/ftl_trace.o 00:17:28.085 SYMLINK libspdk_iscsi.so 00:17:28.085 LIB libspdk_ftl.a 00:17:28.650 SO libspdk_ftl.so.9.0 00:17:28.909 SYMLINK libspdk_ftl.so 00:17:29.168 LIB libspdk_nvmf.a 00:17:29.426 SO libspdk_nvmf.so.20.0 00:17:29.993 SYMLINK libspdk_nvmf.so 00:17:30.252 CC module/env_dpdk/env_dpdk_rpc.o 00:17:30.510 CC module/sock/posix/posix.o 00:17:30.510 CC module/scheduler/dynamic/scheduler_dynamic.o 00:17:30.510 CC module/accel/error/accel_error.o 00:17:30.510 CC module/scheduler/gscheduler/gscheduler.o 00:17:30.510 CC module/blob/bdev/blob_bdev.o 00:17:30.510 CC module/fsdev/aio/fsdev_aio.o 00:17:30.510 CC module/keyring/file/keyring.o 00:17:30.510 CC module/accel/ioat/accel_ioat.o 00:17:30.510 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:17:30.510 LIB libspdk_env_dpdk_rpc.a 00:17:30.510 SO libspdk_env_dpdk_rpc.so.6.0 00:17:30.768 CC module/keyring/file/keyring_rpc.o 00:17:30.768 SYMLINK libspdk_env_dpdk_rpc.so 00:17:30.768 LIB libspdk_scheduler_gscheduler.a 00:17:30.768 SO libspdk_scheduler_gscheduler.so.4.0 00:17:30.768 LIB libspdk_scheduler_dpdk_governor.a 00:17:30.768 CC module/accel/error/accel_error_rpc.o 00:17:30.768 SO libspdk_scheduler_dpdk_governor.so.4.0 00:17:30.768 LIB libspdk_scheduler_dynamic.a 00:17:30.768 CC module/accel/ioat/accel_ioat_rpc.o 00:17:30.768 SYMLINK libspdk_scheduler_gscheduler.so 00:17:30.768 LIB libspdk_keyring_file.a 00:17:30.768 SO libspdk_scheduler_dynamic.so.4.0 00:17:30.768 LIB libspdk_blob_bdev.a 00:17:30.768 SO libspdk_keyring_file.so.2.0 00:17:31.027 SYMLINK libspdk_scheduler_dpdk_governor.so 00:17:31.027 CC module/keyring/linux/keyring.o 00:17:31.027 SO libspdk_blob_bdev.so.11.0 00:17:31.027 CC module/keyring/linux/keyring_rpc.o 00:17:31.027 SYMLINK libspdk_keyring_file.so 00:17:31.027 CC module/fsdev/aio/fsdev_aio_rpc.o 00:17:31.027 SYMLINK libspdk_scheduler_dynamic.so 00:17:31.027 CC module/fsdev/aio/linux_aio_mgr.o 00:17:31.027 LIB libspdk_accel_error.a 00:17:31.027 SYMLINK libspdk_blob_bdev.so 00:17:31.027 SO libspdk_accel_error.so.2.0 00:17:31.027 CC module/accel/dsa/accel_dsa.o 00:17:31.027 LIB libspdk_accel_ioat.a 00:17:31.284 CC module/accel/dsa/accel_dsa_rpc.o 00:17:31.284 SO libspdk_accel_ioat.so.6.0 00:17:31.284 SYMLINK libspdk_accel_error.so 00:17:31.284 LIB libspdk_keyring_linux.a 00:17:31.284 SO libspdk_keyring_linux.so.1.0 00:17:31.284 SYMLINK libspdk_accel_ioat.so 00:17:31.284 SYMLINK libspdk_keyring_linux.so 00:17:31.542 CC module/accel/iaa/accel_iaa.o 00:17:31.542 CC module/bdev/delay/vbdev_delay.o 00:17:31.542 CC module/bdev/error/vbdev_error.o 00:17:31.542 LIB libspdk_accel_dsa.a 00:17:31.542 LIB libspdk_fsdev_aio.a 00:17:31.542 SO libspdk_accel_dsa.so.5.0 00:17:31.542 CC module/bdev/malloc/bdev_malloc.o 00:17:31.542 SO libspdk_fsdev_aio.so.1.0 00:17:31.542 CC module/bdev/lvol/vbdev_lvol.o 00:17:31.801 CC module/bdev/gpt/gpt.o 00:17:31.801 CC module/bdev/null/bdev_null.o 00:17:31.801 SYMLINK libspdk_accel_dsa.so 00:17:31.801 CC module/bdev/malloc/bdev_malloc_rpc.o 00:17:31.801 LIB libspdk_sock_posix.a 00:17:31.801 SYMLINK libspdk_fsdev_aio.so 00:17:31.801 CC module/accel/iaa/accel_iaa_rpc.o 00:17:31.801 CC module/bdev/null/bdev_null_rpc.o 00:17:31.801 SO libspdk_sock_posix.so.6.0 00:17:31.801 CC module/bdev/error/vbdev_error_rpc.o 00:17:31.801 SYMLINK libspdk_sock_posix.so 00:17:31.801 CC module/bdev/delay/vbdev_delay_rpc.o 00:17:32.059 CC module/bdev/gpt/vbdev_gpt.o 00:17:32.059 LIB libspdk_accel_iaa.a 
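The module/bdev objects compiled in this stretch (error, malloc, null, gpt, delay, lvol, passthru, and so on) are the virtual block-device modules that register themselves with the generic bdev layer built earlier, so applications open all of them through one descriptor API. A hedged sketch of that open path — "Malloc0" is a conventional example name for a bdev created by the malloc module, not a device from this log:

    #include "spdk/bdev.h"

    static void bdev_event_cb(enum spdk_bdev_event_type type,
                              struct spdk_bdev *bdev, void *ctx)
    {
        /* React to hot-remove / resize events; a no-op in this sketch. */
    }

    /* Open a bdev read-write by name and hand back its descriptor. */
    int open_example_bdev(struct spdk_bdev_desc **desc)
    {
        return spdk_bdev_open_ext("Malloc0", true, bdev_event_cb,
                                  NULL, desc);
    }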
00:17:32.059 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:17:32.059 SO libspdk_accel_iaa.so.3.0 00:17:32.059 LIB libspdk_bdev_error.a 00:17:32.059 SO libspdk_bdev_error.so.6.0 00:17:32.059 SYMLINK libspdk_accel_iaa.so 00:17:32.059 LIB libspdk_bdev_null.a 00:17:32.348 SO libspdk_bdev_null.so.6.0 00:17:32.348 SYMLINK libspdk_bdev_error.so 00:17:32.348 LIB libspdk_bdev_malloc.a 00:17:32.348 SYMLINK libspdk_bdev_null.so 00:17:32.348 SO libspdk_bdev_malloc.so.6.0 00:17:32.348 LIB libspdk_bdev_delay.a 00:17:32.348 SO libspdk_bdev_delay.so.6.0 00:17:32.348 CC module/bdev/passthru/vbdev_passthru.o 00:17:32.348 LIB libspdk_bdev_gpt.a 00:17:32.348 CC module/blobfs/bdev/blobfs_bdev.o 00:17:32.348 CC module/bdev/nvme/bdev_nvme.o 00:17:32.348 SYMLINK libspdk_bdev_malloc.so 00:17:32.348 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:17:32.348 SYMLINK libspdk_bdev_delay.so 00:17:32.348 SO libspdk_bdev_gpt.so.6.0 00:17:32.605 SYMLINK libspdk_bdev_gpt.so 00:17:32.605 CC module/bdev/raid/bdev_raid.o 00:17:32.605 CC module/bdev/split/vbdev_split.o 00:17:32.605 CC module/bdev/split/vbdev_split_rpc.o 00:17:32.605 LIB libspdk_bdev_lvol.a 00:17:32.605 SO libspdk_bdev_lvol.so.6.0 00:17:32.606 CC module/bdev/raid/bdev_raid_rpc.o 00:17:32.606 CC module/bdev/zone_block/vbdev_zone_block.o 00:17:32.863 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:17:32.863 SYMLINK libspdk_bdev_lvol.so 00:17:32.863 CC module/bdev/xnvme/bdev_xnvme.o 00:17:32.863 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:17:32.863 CC module/bdev/raid/bdev_raid_sb.o 00:17:32.863 LIB libspdk_bdev_split.a 00:17:32.863 SO libspdk_bdev_split.so.6.0 00:17:33.121 SYMLINK libspdk_bdev_split.so 00:17:33.121 CC module/bdev/raid/raid0.o 00:17:33.121 CC module/bdev/raid/raid1.o 00:17:33.121 LIB libspdk_blobfs_bdev.a 00:17:33.121 LIB libspdk_bdev_passthru.a 00:17:33.121 SO libspdk_blobfs_bdev.so.6.0 00:17:33.121 SO libspdk_bdev_passthru.so.6.0 00:17:33.121 SYMLINK libspdk_blobfs_bdev.so 00:17:33.121 CC module/bdev/nvme/bdev_nvme_rpc.o 00:17:33.121 LIB libspdk_bdev_zone_block.a 00:17:33.379 SYMLINK libspdk_bdev_passthru.so 00:17:33.379 CC module/bdev/nvme/nvme_rpc.o 00:17:33.379 SO libspdk_bdev_zone_block.so.6.0 00:17:33.379 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:17:33.379 SYMLINK libspdk_bdev_zone_block.so 00:17:33.379 CC module/bdev/raid/concat.o 00:17:33.379 CC module/bdev/aio/bdev_aio.o 00:17:33.379 CC module/bdev/aio/bdev_aio_rpc.o 00:17:33.379 CC module/bdev/nvme/bdev_mdns_client.o 00:17:33.637 LIB libspdk_bdev_xnvme.a 00:17:33.637 CC module/bdev/nvme/vbdev_opal.o 00:17:33.637 SO libspdk_bdev_xnvme.so.3.0 00:17:33.637 SYMLINK libspdk_bdev_xnvme.so 00:17:33.637 CC module/bdev/nvme/vbdev_opal_rpc.o 00:17:33.895 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:17:33.895 LIB libspdk_bdev_aio.a 00:17:33.895 SO libspdk_bdev_aio.so.6.0 00:17:33.895 CC module/bdev/ftl/bdev_ftl.o 00:17:34.153 CC module/bdev/ftl/bdev_ftl_rpc.o 00:17:34.153 CC module/bdev/iscsi/bdev_iscsi.o 00:17:34.153 SYMLINK libspdk_bdev_aio.so 00:17:34.153 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:17:34.153 CC module/bdev/virtio/bdev_virtio_scsi.o 00:17:34.153 CC module/bdev/virtio/bdev_virtio_blk.o 00:17:34.153 CC module/bdev/virtio/bdev_virtio_rpc.o 00:17:34.410 LIB libspdk_bdev_raid.a 00:17:34.410 SO libspdk_bdev_raid.so.6.0 00:17:34.668 SYMLINK libspdk_bdev_raid.so 00:17:34.668 LIB libspdk_bdev_ftl.a 00:17:34.668 LIB libspdk_bdev_iscsi.a 00:17:34.668 SO libspdk_bdev_ftl.so.6.0 00:17:34.668 SO libspdk_bdev_iscsi.so.6.0 00:17:34.668 SYMLINK libspdk_bdev_ftl.so 00:17:34.668 SYMLINK 
libspdk_bdev_iscsi.so 00:17:34.925 LIB libspdk_bdev_virtio.a 00:17:35.186 SO libspdk_bdev_virtio.so.6.0 00:17:35.186 SYMLINK libspdk_bdev_virtio.so 00:17:37.782 LIB libspdk_bdev_nvme.a 00:17:37.782 SO libspdk_bdev_nvme.so.7.1 00:17:37.782 SYMLINK libspdk_bdev_nvme.so 00:17:38.349 CC module/event/subsystems/keyring/keyring.o 00:17:38.349 CC module/event/subsystems/scheduler/scheduler.o 00:17:38.349 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:17:38.349 CC module/event/subsystems/fsdev/fsdev.o 00:17:38.349 CC module/event/subsystems/iobuf/iobuf.o 00:17:38.349 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:17:38.349 CC module/event/subsystems/sock/sock.o 00:17:38.349 CC module/event/subsystems/vmd/vmd.o 00:17:38.349 CC module/event/subsystems/vmd/vmd_rpc.o 00:17:38.349 LIB libspdk_event_scheduler.a 00:17:38.608 SO libspdk_event_scheduler.so.4.0 00:17:38.608 LIB libspdk_event_keyring.a 00:17:38.608 LIB libspdk_event_sock.a 00:17:38.608 SO libspdk_event_keyring.so.1.0 00:17:38.608 LIB libspdk_event_vhost_blk.a 00:17:38.608 LIB libspdk_event_vmd.a 00:17:38.608 LIB libspdk_event_fsdev.a 00:17:38.608 LIB libspdk_event_iobuf.a 00:17:38.608 SO libspdk_event_sock.so.5.0 00:17:38.608 SO libspdk_event_vhost_blk.so.3.0 00:17:38.608 SO libspdk_event_fsdev.so.1.0 00:17:38.608 SYMLINK libspdk_event_scheduler.so 00:17:38.608 SO libspdk_event_vmd.so.6.0 00:17:38.608 SO libspdk_event_iobuf.so.3.0 00:17:38.608 SYMLINK libspdk_event_keyring.so 00:17:38.608 SYMLINK libspdk_event_vhost_blk.so 00:17:38.608 SYMLINK libspdk_event_sock.so 00:17:38.608 SYMLINK libspdk_event_fsdev.so 00:17:38.866 SYMLINK libspdk_event_iobuf.so 00:17:38.866 SYMLINK libspdk_event_vmd.so 00:17:39.123 CC module/event/subsystems/accel/accel.o 00:17:39.123 LIB libspdk_event_accel.a 00:17:39.381 SO libspdk_event_accel.so.6.0 00:17:39.381 SYMLINK libspdk_event_accel.so 00:17:39.639 CC module/event/subsystems/bdev/bdev.o 00:17:39.897 LIB libspdk_event_bdev.a 00:17:40.156 SO libspdk_event_bdev.so.6.0 00:17:40.156 SYMLINK libspdk_event_bdev.so 00:17:40.414 CC module/event/subsystems/ublk/ublk.o 00:17:40.414 CC module/event/subsystems/nbd/nbd.o 00:17:40.414 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:17:40.414 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:17:40.414 CC module/event/subsystems/scsi/scsi.o 00:17:40.723 LIB libspdk_event_ublk.a 00:17:40.723 LIB libspdk_event_nbd.a 00:17:40.723 SO libspdk_event_ublk.so.3.0 00:17:40.723 LIB libspdk_event_scsi.a 00:17:40.723 SO libspdk_event_nbd.so.6.0 00:17:40.723 SO libspdk_event_scsi.so.6.0 00:17:40.723 SYMLINK libspdk_event_ublk.so 00:17:40.723 LIB libspdk_event_nvmf.a 00:17:40.723 SYMLINK libspdk_event_nbd.so 00:17:40.983 SYMLINK libspdk_event_scsi.so 00:17:40.983 SO libspdk_event_nvmf.so.6.0 00:17:40.983 SYMLINK libspdk_event_nvmf.so 00:17:41.241 CC module/event/subsystems/iscsi/iscsi.o 00:17:41.241 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:17:41.498 LIB libspdk_event_vhost_scsi.a 00:17:41.498 LIB libspdk_event_iscsi.a 00:17:41.498 SO libspdk_event_iscsi.so.6.0 00:17:41.498 SO libspdk_event_vhost_scsi.so.3.0 00:17:41.498 SYMLINK libspdk_event_vhost_scsi.so 00:17:41.498 SYMLINK libspdk_event_iscsi.so 00:17:41.756 SO libspdk.so.6.0 00:17:41.756 SYMLINK libspdk.so 00:17:42.015 CXX app/trace/trace.o 00:17:42.015 CC app/spdk_lspci/spdk_lspci.o 00:17:42.015 CC app/spdk_nvme_perf/perf.o 00:17:42.015 CC app/trace_record/trace_record.o 00:17:42.015 CC app/spdk_nvme_identify/identify.o 00:17:42.015 CC app/iscsi_tgt/iscsi_tgt.o 00:17:42.015 CC app/nvmf_tgt/nvmf_main.o 00:17:42.015 CC 
app/spdk_tgt/spdk_tgt.o 00:17:42.273 CC test/thread/poller_perf/poller_perf.o 00:17:42.273 CC examples/util/zipf/zipf.o 00:17:42.273 LINK spdk_lspci 00:17:42.273 LINK nvmf_tgt 00:17:42.531 LINK iscsi_tgt 00:17:42.531 LINK zipf 00:17:42.531 LINK poller_perf 00:17:42.531 LINK spdk_tgt 00:17:42.531 LINK spdk_trace_record 00:17:42.789 LINK spdk_trace 00:17:42.789 CC app/spdk_nvme_discover/discovery_aer.o 00:17:43.048 CC app/spdk_top/spdk_top.o 00:17:43.048 TEST_HEADER include/spdk/accel.h 00:17:43.048 TEST_HEADER include/spdk/accel_module.h 00:17:43.048 TEST_HEADER include/spdk/assert.h 00:17:43.048 TEST_HEADER include/spdk/barrier.h 00:17:43.048 TEST_HEADER include/spdk/base64.h 00:17:43.048 TEST_HEADER include/spdk/bdev.h 00:17:43.048 TEST_HEADER include/spdk/bdev_module.h 00:17:43.048 TEST_HEADER include/spdk/bdev_zone.h 00:17:43.048 TEST_HEADER include/spdk/bit_array.h 00:17:43.048 TEST_HEADER include/spdk/bit_pool.h 00:17:43.048 TEST_HEADER include/spdk/blob_bdev.h 00:17:43.048 TEST_HEADER include/spdk/blobfs_bdev.h 00:17:43.048 TEST_HEADER include/spdk/blobfs.h 00:17:43.048 TEST_HEADER include/spdk/blob.h 00:17:43.048 TEST_HEADER include/spdk/conf.h 00:17:43.048 TEST_HEADER include/spdk/config.h 00:17:43.048 TEST_HEADER include/spdk/cpuset.h 00:17:43.048 TEST_HEADER include/spdk/crc16.h 00:17:43.048 TEST_HEADER include/spdk/crc32.h 00:17:43.048 TEST_HEADER include/spdk/crc64.h 00:17:43.048 TEST_HEADER include/spdk/dif.h 00:17:43.048 TEST_HEADER include/spdk/dma.h 00:17:43.048 TEST_HEADER include/spdk/endian.h 00:17:43.048 TEST_HEADER include/spdk/env_dpdk.h 00:17:43.048 CC examples/ioat/perf/perf.o 00:17:43.048 TEST_HEADER include/spdk/env.h 00:17:43.048 TEST_HEADER include/spdk/event.h 00:17:43.048 TEST_HEADER include/spdk/fd_group.h 00:17:43.048 TEST_HEADER include/spdk/fd.h 00:17:43.048 TEST_HEADER include/spdk/file.h 00:17:43.048 TEST_HEADER include/spdk/fsdev.h 00:17:43.048 TEST_HEADER include/spdk/fsdev_module.h 00:17:43.048 TEST_HEADER include/spdk/ftl.h 00:17:43.048 TEST_HEADER include/spdk/fuse_dispatcher.h 00:17:43.048 TEST_HEADER include/spdk/gpt_spec.h 00:17:43.048 TEST_HEADER include/spdk/hexlify.h 00:17:43.048 TEST_HEADER include/spdk/histogram_data.h 00:17:43.048 TEST_HEADER include/spdk/idxd.h 00:17:43.048 TEST_HEADER include/spdk/idxd_spec.h 00:17:43.048 TEST_HEADER include/spdk/init.h 00:17:43.048 TEST_HEADER include/spdk/ioat.h 00:17:43.048 CC test/dma/test_dma/test_dma.o 00:17:43.305 TEST_HEADER include/spdk/ioat_spec.h 00:17:43.305 TEST_HEADER include/spdk/iscsi_spec.h 00:17:43.305 CC examples/ioat/verify/verify.o 00:17:43.305 TEST_HEADER include/spdk/json.h 00:17:43.305 TEST_HEADER include/spdk/jsonrpc.h 00:17:43.305 TEST_HEADER include/spdk/keyring.h 00:17:43.305 TEST_HEADER include/spdk/keyring_module.h 00:17:43.305 TEST_HEADER include/spdk/likely.h 00:17:43.305 TEST_HEADER include/spdk/log.h 00:17:43.305 TEST_HEADER include/spdk/lvol.h 00:17:43.305 TEST_HEADER include/spdk/md5.h 00:17:43.305 TEST_HEADER include/spdk/memory.h 00:17:43.305 TEST_HEADER include/spdk/mmio.h 00:17:43.305 TEST_HEADER include/spdk/nbd.h 00:17:43.305 TEST_HEADER include/spdk/net.h 00:17:43.305 TEST_HEADER include/spdk/notify.h 00:17:43.305 TEST_HEADER include/spdk/nvme.h 00:17:43.305 TEST_HEADER include/spdk/nvme_intel.h 00:17:43.305 TEST_HEADER include/spdk/nvme_ocssd.h 00:17:43.305 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:17:43.305 TEST_HEADER include/spdk/nvme_spec.h 00:17:43.305 TEST_HEADER include/spdk/nvme_zns.h 00:17:43.305 TEST_HEADER include/spdk/nvmf_cmd.h 00:17:43.305 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:17:43.305 CC test/app/bdev_svc/bdev_svc.o 00:17:43.305 TEST_HEADER include/spdk/nvmf.h 00:17:43.305 TEST_HEADER include/spdk/nvmf_spec.h 00:17:43.305 TEST_HEADER include/spdk/nvmf_transport.h 00:17:43.305 TEST_HEADER include/spdk/opal.h 00:17:43.305 TEST_HEADER include/spdk/opal_spec.h 00:17:43.305 LINK spdk_nvme_discover 00:17:43.305 TEST_HEADER include/spdk/pci_ids.h 00:17:43.305 TEST_HEADER include/spdk/pipe.h 00:17:43.305 TEST_HEADER include/spdk/queue.h 00:17:43.305 TEST_HEADER include/spdk/reduce.h 00:17:43.305 TEST_HEADER include/spdk/rpc.h 00:17:43.305 TEST_HEADER include/spdk/scheduler.h 00:17:43.305 TEST_HEADER include/spdk/scsi.h 00:17:43.305 TEST_HEADER include/spdk/scsi_spec.h 00:17:43.305 TEST_HEADER include/spdk/sock.h 00:17:43.306 TEST_HEADER include/spdk/stdinc.h 00:17:43.306 TEST_HEADER include/spdk/string.h 00:17:43.306 TEST_HEADER include/spdk/thread.h 00:17:43.306 TEST_HEADER include/spdk/trace.h 00:17:43.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:17:43.306 TEST_HEADER include/spdk/trace_parser.h 00:17:43.306 TEST_HEADER include/spdk/tree.h 00:17:43.306 TEST_HEADER include/spdk/ublk.h 00:17:43.306 TEST_HEADER include/spdk/util.h 00:17:43.306 TEST_HEADER include/spdk/uuid.h 00:17:43.306 TEST_HEADER include/spdk/version.h 00:17:43.306 TEST_HEADER include/spdk/vfio_user_pci.h 00:17:43.306 TEST_HEADER include/spdk/vfio_user_spec.h 00:17:43.306 TEST_HEADER include/spdk/vhost.h 00:17:43.306 TEST_HEADER include/spdk/vmd.h 00:17:43.306 TEST_HEADER include/spdk/xor.h 00:17:43.306 TEST_HEADER include/spdk/zipf.h 00:17:43.306 CXX test/cpp_headers/accel.o 00:17:43.565 LINK verify 00:17:43.565 LINK ioat_perf 00:17:43.565 LINK spdk_nvme_identify 00:17:43.565 LINK bdev_svc 00:17:43.822 CXX test/cpp_headers/accel_module.o 00:17:43.822 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:17:43.822 LINK spdk_nvme_perf 00:17:44.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:17:44.079 LINK nvme_fuzz 00:17:44.079 LINK test_dma 00:17:44.337 CXX test/cpp_headers/assert.o 00:17:44.337 CC examples/vmd/lsvmd/lsvmd.o 00:17:44.337 CC examples/idxd/perf/perf.o 00:17:44.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:17:44.337 CC examples/interrupt_tgt/interrupt_tgt.o 00:17:44.594 LINK lsvmd 00:17:44.594 CC app/spdk_dd/spdk_dd.o 00:17:44.594 CXX test/cpp_headers/barrier.o 00:17:44.851 LINK interrupt_tgt 00:17:44.851 CC app/fio/nvme/fio_plugin.o 00:17:44.851 CC app/vhost/vhost.o 00:17:44.851 CXX test/cpp_headers/base64.o 00:17:45.110 CC examples/vmd/led/led.o 00:17:45.110 LINK vhost_fuzz 00:17:45.110 LINK vhost 00:17:45.110 CXX test/cpp_headers/bdev.o 00:17:45.110 LINK spdk_dd 00:17:45.110 LINK idxd_perf 00:17:45.110 LINK spdk_top 00:17:45.367 LINK led 00:17:45.367 CC examples/thread/thread/thread_ex.o 00:17:45.367 CXX test/cpp_headers/bdev_module.o 00:17:45.624 CC test/app/histogram_perf/histogram_perf.o 00:17:45.624 LINK spdk_nvme 00:17:45.624 CC app/fio/bdev/fio_plugin.o 00:17:45.624 CC test/env/mem_callbacks/mem_callbacks.o 00:17:45.624 CC test/app/jsoncat/jsoncat.o 00:17:45.883 LINK thread 00:17:45.883 CC test/app/stub/stub.o 00:17:45.883 CXX test/cpp_headers/bdev_zone.o 00:17:45.883 CC examples/sock/hello_world/hello_sock.o 00:17:45.883 LINK jsoncat 00:17:45.883 LINK histogram_perf 00:17:45.883 CC test/env/vtophys/vtophys.o 00:17:46.140 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:17:46.140 LINK vtophys 00:17:46.140 LINK stub 00:17:46.140 CXX test/cpp_headers/bit_array.o 00:17:46.398 LINK hello_sock 00:17:46.398 CC test/env/memory/memory_ut.o 
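[editor's note] The TEST_HEADER entries and the test/cpp_headers objects compiled above and below are SPDK's public-header hygiene check: each header under include/spdk is compiled as its own translation unit, so a header that silently depends on another include fails on its own. A minimal sketch of the idea, in bash; the generator loop and compiler flags here are illustrative, not SPDK's actual build rules:

    # Hypothetical per-header check: emit a tiny translation unit that includes
    # only one public header, then compile it; failures flag headers that are
    # not self-contained.
    for hdr in include/spdk/*.h; do
        tu=$(mktemp --suffix=.cpp)
        printf '#include "%s"\n' "$hdr" > "$tu"
        g++ -Iinclude -std=c++11 -c "$tu" -o /dev/null || echo "not self-contained: $hdr"
        rm -f "$tu"
    done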
00:17:46.398 CXX test/cpp_headers/bit_pool.o 00:17:46.398 CC test/env/pci/pci_ut.o 00:17:46.398 LINK env_dpdk_post_init 00:17:46.398 LINK mem_callbacks 00:17:46.656 LINK spdk_bdev 00:17:46.656 CXX test/cpp_headers/blob_bdev.o 00:17:46.656 CXX test/cpp_headers/blobfs_bdev.o 00:17:46.656 CXX test/cpp_headers/blobfs.o 00:17:46.656 CXX test/cpp_headers/blob.o 00:17:46.914 CC test/event/event_perf/event_perf.o 00:17:46.914 CC examples/accel/perf/accel_perf.o 00:17:46.914 CC examples/blob/hello_world/hello_blob.o 00:17:47.172 CXX test/cpp_headers/conf.o 00:17:47.172 CC examples/blob/cli/blobcli.o 00:17:47.172 LINK event_perf 00:17:47.431 CC examples/nvme/hello_world/hello_world.o 00:17:47.431 CXX test/cpp_headers/config.o 00:17:47.431 CC examples/fsdev/hello_world/hello_fsdev.o 00:17:47.431 CXX test/cpp_headers/cpuset.o 00:17:47.431 LINK pci_ut 00:17:47.431 LINK hello_blob 00:17:47.431 CC test/event/reactor/reactor.o 00:17:47.701 LINK iscsi_fuzz 00:17:47.701 LINK hello_world 00:17:47.701 CXX test/cpp_headers/crc16.o 00:17:47.701 LINK reactor 00:17:47.701 CXX test/cpp_headers/crc32.o 00:17:47.961 LINK blobcli 00:17:47.961 CC examples/nvme/reconnect/reconnect.o 00:17:47.961 CXX test/cpp_headers/crc64.o 00:17:47.961 LINK accel_perf 00:17:47.961 CC examples/nvme/arbitration/arbitration.o 00:17:47.961 CC examples/nvme/nvme_manage/nvme_manage.o 00:17:47.961 LINK hello_fsdev 00:17:47.961 CC test/event/reactor_perf/reactor_perf.o 00:17:47.961 CXX test/cpp_headers/dif.o 00:17:48.219 CXX test/cpp_headers/dma.o 00:17:48.219 LINK reactor_perf 00:17:48.477 CXX test/cpp_headers/endian.o 00:17:48.477 CC test/rpc_client/rpc_client_test.o 00:17:48.477 LINK reconnect 00:17:48.477 LINK arbitration 00:17:48.477 CC test/nvme/aer/aer.o 00:17:48.477 LINK memory_ut 00:17:48.734 CC test/accel/dif/dif.o 00:17:48.734 LINK rpc_client_test 00:17:48.734 CXX test/cpp_headers/env_dpdk.o 00:17:48.734 CC test/event/app_repeat/app_repeat.o 00:17:48.734 LINK nvme_manage 00:17:48.734 CC test/blobfs/mkfs/mkfs.o 00:17:48.992 LINK app_repeat 00:17:48.992 CC examples/nvme/hotplug/hotplug.o 00:17:48.992 CC test/nvme/reset/reset.o 00:17:48.992 CXX test/cpp_headers/env.o 00:17:48.992 LINK aer 00:17:48.992 CC test/nvme/sgl/sgl.o 00:17:49.250 CC test/nvme/e2edp/nvme_dp.o 00:17:49.250 LINK mkfs 00:17:49.250 CXX test/cpp_headers/event.o 00:17:49.508 CXX test/cpp_headers/fd_group.o 00:17:49.508 LINK hotplug 00:17:49.508 LINK reset 00:17:49.508 CC test/event/scheduler/scheduler.o 00:17:49.508 CC test/lvol/esnap/esnap.o 00:17:49.508 LINK nvme_dp 00:17:49.766 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:49.766 LINK sgl 00:17:49.766 CXX test/cpp_headers/fd.o 00:17:49.766 CXX test/cpp_headers/file.o 00:17:49.766 CC examples/nvme/abort/abort.o 00:17:50.023 LINK scheduler 00:17:50.023 LINK cmb_copy 00:17:50.023 LINK dif 00:17:50.023 CXX test/cpp_headers/fsdev.o 00:17:50.281 CC test/nvme/overhead/overhead.o 00:17:50.281 CC examples/bdev/hello_world/hello_bdev.o 00:17:50.281 CC test/nvme/err_injection/err_injection.o 00:17:50.281 CC examples/bdev/bdevperf/bdevperf.o 00:17:50.281 CC test/nvme/startup/startup.o 00:17:50.539 CXX test/cpp_headers/fsdev_module.o 00:17:50.539 CC test/nvme/reserve/reserve.o 00:17:50.539 CC test/nvme/simple_copy/simple_copy.o 00:17:50.539 LINK hello_bdev 00:17:50.539 LINK startup 00:17:50.797 LINK err_injection 00:17:50.797 LINK overhead 00:17:50.797 LINK abort 00:17:50.797 LINK reserve 00:17:50.797 CXX test/cpp_headers/ftl.o 00:17:51.079 CC test/nvme/connect_stress/connect_stress.o 00:17:51.079 LINK simple_copy 00:17:51.079 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:17:51.079 CC test/nvme/compliance/nvme_compliance.o 00:17:51.079 CC test/nvme/boot_partition/boot_partition.o 00:17:51.381 CC test/nvme/fused_ordering/fused_ordering.o 00:17:51.381 CXX test/cpp_headers/fuse_dispatcher.o 00:17:51.381 CC test/nvme/doorbell_aers/doorbell_aers.o 00:17:51.381 LINK connect_stress 00:17:51.638 LINK boot_partition 00:17:51.639 LINK pmr_persistence 00:17:51.639 LINK fused_ordering 00:17:51.639 CXX test/cpp_headers/gpt_spec.o 00:17:51.639 CC test/nvme/fdp/fdp.o 00:17:51.639 LINK doorbell_aers 00:17:51.639 LINK nvme_compliance 00:17:51.896 CXX test/cpp_headers/hexlify.o 00:17:51.896 CXX test/cpp_headers/histogram_data.o 00:17:51.896 CC test/nvme/cuse/cuse.o 00:17:51.897 CXX test/cpp_headers/idxd.o 00:17:51.897 LINK bdevperf 00:17:51.897 CXX test/cpp_headers/idxd_spec.o 00:17:52.155 CXX test/cpp_headers/init.o 00:17:52.155 CXX test/cpp_headers/ioat.o 00:17:52.155 CXX test/cpp_headers/ioat_spec.o 00:17:52.155 CXX test/cpp_headers/iscsi_spec.o 00:17:52.155 LINK fdp 00:17:52.155 CC test/bdev/bdevio/bdevio.o 00:17:52.412 CXX test/cpp_headers/json.o 00:17:52.412 CXX test/cpp_headers/jsonrpc.o 00:17:52.412 CXX test/cpp_headers/keyring.o 00:17:52.412 CXX test/cpp_headers/keyring_module.o 00:17:52.669 CXX test/cpp_headers/likely.o 00:17:52.669 CXX test/cpp_headers/log.o 00:17:52.669 CXX test/cpp_headers/lvol.o 00:17:52.669 CXX test/cpp_headers/md5.o 00:17:52.669 CXX test/cpp_headers/memory.o 00:17:52.927 CXX test/cpp_headers/mmio.o 00:17:52.927 CC examples/nvmf/nvmf/nvmf.o 00:17:52.927 CXX test/cpp_headers/nbd.o 00:17:52.927 CXX test/cpp_headers/net.o 00:17:52.927 LINK bdevio 00:17:52.927 CXX test/cpp_headers/notify.o 00:17:52.927 CXX test/cpp_headers/nvme.o 00:17:53.185 CXX test/cpp_headers/nvme_intel.o 00:17:53.185 CXX test/cpp_headers/nvme_ocssd.o 00:17:53.185 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:53.185 CXX test/cpp_headers/nvme_spec.o 00:17:53.185 CXX test/cpp_headers/nvme_zns.o 00:17:53.185 CXX test/cpp_headers/nvmf_cmd.o 00:17:53.185 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:53.185 CXX test/cpp_headers/nvmf.o 00:17:53.448 LINK nvmf 00:17:53.448 CXX test/cpp_headers/nvmf_spec.o 00:17:53.448 CXX test/cpp_headers/nvmf_transport.o 00:17:53.448 CXX test/cpp_headers/opal.o 00:17:53.448 CXX test/cpp_headers/opal_spec.o 00:17:53.448 CXX test/cpp_headers/pci_ids.o 00:17:53.448 CXX test/cpp_headers/pipe.o 00:17:53.709 CXX test/cpp_headers/queue.o 00:17:53.709 CXX test/cpp_headers/reduce.o 00:17:53.709 CXX test/cpp_headers/rpc.o 00:17:53.709 CXX test/cpp_headers/scheduler.o 00:17:53.709 CXX test/cpp_headers/scsi.o 00:17:53.709 CXX test/cpp_headers/scsi_spec.o 00:17:53.967 CXX test/cpp_headers/sock.o 00:17:53.967 CXX test/cpp_headers/stdinc.o 00:17:53.967 CXX test/cpp_headers/string.o 00:17:53.967 CXX test/cpp_headers/thread.o 00:17:53.967 CXX test/cpp_headers/trace.o 00:17:53.967 CXX test/cpp_headers/trace_parser.o 00:17:53.967 CXX test/cpp_headers/tree.o 00:17:54.225 CXX test/cpp_headers/ublk.o 00:17:54.225 CXX test/cpp_headers/util.o 00:17:54.225 CXX test/cpp_headers/uuid.o 00:17:54.225 CXX test/cpp_headers/version.o 00:17:54.225 CXX test/cpp_headers/vfio_user_pci.o 00:17:54.225 CXX test/cpp_headers/vfio_user_spec.o 00:17:54.225 CXX test/cpp_headers/vhost.o 00:17:54.225 CXX test/cpp_headers/vmd.o 00:17:54.483 CXX test/cpp_headers/xor.o 00:17:54.483 CXX test/cpp_headers/zipf.o 00:17:54.741 LINK cuse 00:17:58.937 LINK esnap 00:17:59.196 00:17:59.196 real 1m59.525s 00:17:59.196 user 10m50.271s 00:17:59.196 sys 2m32.455s 
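[editor's note] With the build finished (END TEST make, roughly two minutes wall-clock), autotest tears down the resource monitors it started earlier; the trace that follows signals each collector through its pidfile. A hedged sketch of that pidfile pattern, where stop_monitor and $output_dir are illustrative names rather than the exact helpers in pm/common:

    # Each collector wrote its PID to <output>/power/<name>.pid at startup;
    # teardown signals TERM through the same file and cleans it up.
    stop_monitor() {
        local pidfile=$1
        [[ -e $pidfile ]] || return 0    # monitor was never started
        kill -TERM "$(<"$pidfile")" 2>/dev/null
        rm -f "$pidfile"
    }
    for name in collect-cpu-load collect-vmstat; do
        stop_monitor "$output_dir/power/$name.pid"
    done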
00:17:59.196 07:15:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:59.196 07:15:23 make -- common/autotest_common.sh@10 -- $ set +x 00:17:59.196 ************************************ 00:17:59.196 END TEST make 00:17:59.196 ************************************ 00:17:59.196 07:15:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:59.196 07:15:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:59.196 07:15:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:59.196 07:15:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.197 07:15:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:59.197 07:15:23 -- pm/common@44 -- $ pid=5344 00:17:59.197 07:15:23 -- pm/common@50 -- $ kill -TERM 5344 00:17:59.197 07:15:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.197 07:15:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:59.197 07:15:23 -- pm/common@44 -- $ pid=5345 00:17:59.197 07:15:23 -- pm/common@50 -- $ kill -TERM 5345 00:17:59.197 07:15:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:17:59.197 07:15:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:59.197 07:15:23 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:59.197 07:15:23 -- common/autotest_common.sh@1693 -- # lcov --version 00:17:59.197 07:15:23 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:59.456 07:15:23 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:59.456 07:15:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.456 07:15:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.456 07:15:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.456 07:15:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.456 07:15:23 -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.456 07:15:23 -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.456 07:15:23 -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.456 07:15:23 -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.456 07:15:23 -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.456 07:15:23 -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.456 07:15:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.456 07:15:23 -- scripts/common.sh@344 -- # case "$op" in 00:17:59.456 07:15:23 -- scripts/common.sh@345 -- # : 1 00:17:59.456 07:15:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.456 07:15:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.456 07:15:23 -- scripts/common.sh@365 -- # decimal 1 00:17:59.456 07:15:23 -- scripts/common.sh@353 -- # local d=1 00:17:59.456 07:15:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.456 07:15:23 -- scripts/common.sh@355 -- # echo 1 00:17:59.456 07:15:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.456 07:15:23 -- scripts/common.sh@366 -- # decimal 2 00:17:59.456 07:15:23 -- scripts/common.sh@353 -- # local d=2 00:17:59.456 07:15:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.456 07:15:23 -- scripts/common.sh@355 -- # echo 2 00:17:59.456 07:15:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.456 07:15:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.456 07:15:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.456 07:15:23 -- scripts/common.sh@368 -- # return 0 00:17:59.456 07:15:23 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.456 07:15:23 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.456 --rc genhtml_branch_coverage=1 00:17:59.456 --rc genhtml_function_coverage=1 00:17:59.456 --rc genhtml_legend=1 00:17:59.456 --rc geninfo_all_blocks=1 00:17:59.456 --rc geninfo_unexecuted_blocks=1 00:17:59.456 00:17:59.456 ' 00:17:59.456 07:15:23 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.456 --rc genhtml_branch_coverage=1 00:17:59.456 --rc genhtml_function_coverage=1 00:17:59.456 --rc genhtml_legend=1 00:17:59.456 --rc geninfo_all_blocks=1 00:17:59.456 --rc geninfo_unexecuted_blocks=1 00:17:59.456 00:17:59.456 ' 00:17:59.456 07:15:23 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.456 --rc genhtml_branch_coverage=1 00:17:59.456 --rc genhtml_function_coverage=1 00:17:59.456 --rc genhtml_legend=1 00:17:59.456 --rc geninfo_all_blocks=1 00:17:59.456 --rc geninfo_unexecuted_blocks=1 00:17:59.456 00:17:59.456 ' 00:17:59.456 07:15:23 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:59.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.456 --rc genhtml_branch_coverage=1 00:17:59.456 --rc genhtml_function_coverage=1 00:17:59.456 --rc genhtml_legend=1 00:17:59.456 --rc geninfo_all_blocks=1 00:17:59.456 --rc geninfo_unexecuted_blocks=1 00:17:59.456 00:17:59.456 ' 00:17:59.456 07:15:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.456 07:15:23 -- nvmf/common.sh@7 -- # uname -s 00:17:59.456 07:15:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.456 07:15:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.456 07:15:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.456 07:15:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.456 07:15:23 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.456 07:15:23 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:59.456 07:15:23 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.456 07:15:23 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:59.456 07:15:23 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:300399fd-40ba-4a3f-8d5e-751087a81d1d 00:17:59.456 07:15:23 -- nvmf/common.sh@16 -- # NVME_HOSTID=300399fd-40ba-4a3f-8d5e-751087a81d1d 00:17:59.456 07:15:23 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.456 07:15:23 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:59.456 07:15:23 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:17:59.456 07:15:23 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.456 07:15:23 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.456 07:15:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.456 07:15:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.456 07:15:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.456 07:15:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.456 07:15:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.456 07:15:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.456 07:15:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.456 07:15:23 -- paths/export.sh@5 -- # export PATH 00:17:59.457 07:15:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.457 07:15:23 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:59.457 07:15:23 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:59.457 07:15:23 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:59.457 07:15:23 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:59.457 07:15:23 -- nvmf/common.sh@50 -- # : 0 00:17:59.457 07:15:23 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:59.457 07:15:23 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:59.457 07:15:23 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:59.457 07:15:23 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.457 07:15:23 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.457 07:15:23 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:59.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:59.457 07:15:23 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:59.457 07:15:23 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:59.457 07:15:23 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:59.457 07:15:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:59.457 07:15:23 -- spdk/autotest.sh@32 -- # uname -s 00:17:59.457 07:15:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:59.457 07:15:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:59.457 07:15:23 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:59.457 07:15:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:17:59.457 07:15:23 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:59.457 07:15:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:59.457 07:15:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:59.457 07:15:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:59.457 07:15:23 -- spdk/autotest.sh@48 -- # udevadm_pid=55134 00:17:59.457 07:15:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:59.457 07:15:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:17:59.457 07:15:23 -- pm/common@17 -- # local monitor 00:17:59.457 07:15:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.457 07:15:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:59.457 07:15:23 -- pm/common@21 -- # date +%s 00:17:59.457 07:15:23 -- pm/common@25 -- # sleep 1 00:17:59.457 07:15:23 -- pm/common@21 -- # date +%s 00:17:59.457 07:15:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086923 00:17:59.457 07:15:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086923 00:17:59.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086923_collect-vmstat.pm.log 00:17:59.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086923_collect-cpu-load.pm.log 00:18:00.393 07:15:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:18:00.393 07:15:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:18:00.393 07:15:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.393 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:18:00.393 07:15:24 -- spdk/autotest.sh@59 -- # create_test_list 00:18:00.393 07:15:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:18:00.393 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:18:00.651 07:15:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:18:00.651 07:15:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:18:00.651 07:15:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:18:00.651 07:15:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:18:00.651 07:15:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:18:00.651 07:15:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:18:00.651 07:15:24 -- common/autotest_common.sh@1457 -- # uname 00:18:00.651 07:15:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:18:00.651 07:15:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:18:00.651 07:15:24 -- common/autotest_common.sh@1477 -- # uname 00:18:00.651 07:15:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:18:00.651 07:15:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:18:00.651 07:15:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:18:00.651 lcov: LCOV version 1.15 00:18:00.651 07:15:24 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:18:22.692 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:18:22.692 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:18:44.777 07:16:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:18:44.777 07:16:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.777 07:16:05 -- common/autotest_common.sh@10 -- # set +x 00:18:44.777 07:16:05 -- spdk/autotest.sh@78 -- # rm -f 00:18:44.777 07:16:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:44.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:44.777 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:44.777 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:44.777 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:44.777 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:44.777 07:16:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:18:44.777 07:16:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:44.777 07:16:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:44.777 07:16:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.777 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.777 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.777 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:44.777 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.777 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:18:44.777 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:18:44.777 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.777 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 
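[editor's note] The get_zoned_devs scan traced above, and continuing below, decides which namespaces the generic block tests must skip: a device counts as zoned when its queue/zoned sysfs attribute reads anything other than "none". A condensed reconstruction of that logic (the trace shows the real helper filling an associative array via local -gA; a plain list is enough to show the idea):

    # Collect zoned namespaces so the later GPT/dd probes can skip them.
    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs+=("${nvme##*/}")
    done
    echo "zoned devices: ${zoned_devs[*]}"

On this run every namespace reports "none", so nothing is excluded and all six devices proceed to the GPT check and the 1 MiB zero-fill.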
00:18:44.777 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:18:44.777 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.777 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.778 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:18:44.778 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:18:44.778 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:18:44.778 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.778 07:16:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:44.778 07:16:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:44.778 07:16:07 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:44.778 07:16:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:44.778 07:16:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100676 s, 104 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497563 s, 211 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:18:44.778 07:16:07 -- 
scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00538787 s, 195 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372793 s, 281 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419948 s, 250 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:18:44.778 07:16:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:18:44.778 07:16:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:18:44.778 07:16:07 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:18:44.778 07:16:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:18:44.778 No valid GPT data, bailing 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:18:44.778 07:16:07 -- scripts/common.sh@394 -- # pt= 00:18:44.778 07:16:07 -- scripts/common.sh@395 -- # return 1 00:18:44.778 07:16:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:18:44.778 1+0 records in 00:18:44.778 1+0 records out 00:18:44.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542695 s, 193 MB/s 00:18:44.778 07:16:07 -- spdk/autotest.sh@105 -- # sync 00:18:44.778 07:16:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:18:44.778 07:16:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:18:44.778 07:16:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:18:46.222 07:16:10 -- spdk/autotest.sh@111 -- # uname -s 00:18:46.222 07:16:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:18:46.222 07:16:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:18:46.222 07:16:10 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:18:46.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:47.357 Hugepages 00:18:47.357 node hugesize free / total 00:18:47.357 node0 1048576kB 0 / 0 00:18:47.357 node0 2048kB 0 / 0 00:18:47.357 00:18:47.357 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:47.357 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:18:47.357 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:18:47.616 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:18:47.616 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:18:47.616 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:18:47.876 07:16:11 -- spdk/autotest.sh@117 -- # uname -s 00:18:47.876 07:16:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:18:47.876 07:16:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:18:47.876 07:16:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:48.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:49.050 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.050 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.050 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.050 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:49.310 07:16:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:18:50.247 07:16:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:18:50.247 07:16:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:18:50.247 07:16:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:18:50.247 07:16:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:18:50.247 07:16:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:50.247 07:16:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:18:50.247 07:16:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:50.247 07:16:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:50.247 07:16:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:50.247 07:16:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:18:50.247 07:16:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:50.247 07:16:14 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:50.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.072 Waiting for block devices as requested 00:18:51.072 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.072 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.379 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:56.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:56.747 07:16:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:56.747 07:16:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:56.747 07:16:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:56.747 07:16:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:56.747 07:16:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1543 -- # continue 00:18:56.747 07:16:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:56.747 07:16:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:18:56.747 07:16:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:56.747 07:16:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:56.747 07:16:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:56.747 07:16:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:56.747 07:16:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:56.748 07:16:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1543 -- # continue 00:18:56.748 07:16:20 -- 
common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:56.748 07:16:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:56.748 07:16:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:56.748 07:16:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:56.748 07:16:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1543 -- # continue 00:18:56.748 07:16:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:18:56.748 07:16:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:18:56.748 07:16:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:18:56.748 07:16:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:18:56.748 07:16:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:18:56.748 07:16:20 -- 
common/autotest_common.sh@1540 -- # grep unvmcap 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:18:56.748 07:16:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:18:56.748 07:16:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:18:56.748 07:16:20 -- common/autotest_common.sh@1543 -- # continue 00:18:56.748 07:16:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:18:56.748 07:16:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.748 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:18:56.748 07:16:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:18:56.748 07:16:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.748 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:18:56.748 07:16:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:57.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:58.247 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.247 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.247 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:58.247 07:16:22 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:18:58.247 07:16:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.247 07:16:22 -- common/autotest_common.sh@10 -- # set +x 00:18:58.247 07:16:22 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:18:58.247 07:16:22 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:18:58.247 07:16:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:18:58.247 07:16:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:18:58.247 07:16:22 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:18:58.247 07:16:22 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:18:58.247 07:16:22 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:18:58.247 07:16:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:18:58.247 07:16:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:58.247 07:16:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:18:58.247 07:16:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:58.247 07:16:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:58.247 07:16:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:58.505 07:16:22 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:18:58.505 07:16:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:58.505 07:16:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:58.505 07:16:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:58.505 07:16:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:58.505 07:16:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:58.505 07:16:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 
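[editor's note] The pre_cleanup pass traced above resolves each PCI address to its /dev/nvmeX controller through sysfs, then reads OACS from nvme id-ctrl: bit 3 (0x8) advertises namespace management, and unvmcap confirms no capacity is left unallocated before the controller is reused. A condensed reconstruction of that lookup, following the same sysfs conventions as the trace; the scan continuing below is a separate pass that compares each controller's PCI device id against 0x0a54 to pick disks needing OPAL revert:

    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        # each /sys/class/nvme/nvmeX symlink resolves through its PCI device
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        printf '/dev/%s\n' "${path##*/}"
    }
    ctrlr=$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "$ctrlr: namespace management supported"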
00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:58.505 07:16:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:58.505 07:16:22 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:18:58.505 07:16:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:18:58.505 07:16:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:18:58.505 07:16:22 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:18:58.505 07:16:22 -- common/autotest_common.sh@1572 -- # return 0 00:18:58.505 07:16:22 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:18:58.505 07:16:22 -- common/autotest_common.sh@1580 -- # return 0 00:18:58.505 07:16:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:18:58.505 07:16:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:18:58.505 07:16:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:58.505 07:16:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:18:58.505 07:16:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:18:58.505 07:16:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.505 07:16:22 -- common/autotest_common.sh@10 -- # set +x 00:18:58.505 07:16:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:18:58.505 07:16:22 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:58.505 07:16:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.505 07:16:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.505 07:16:22 -- common/autotest_common.sh@10 -- # set +x 00:18:58.505 ************************************ 00:18:58.505 START TEST env 00:18:58.505 ************************************ 00:18:58.505 07:16:22 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:18:58.505 * Looking for test storage... 00:18:58.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:18:58.505 07:16:22 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:58.505 07:16:22 env -- common/autotest_common.sh@1693 -- # lcov --version 00:18:58.505 07:16:22 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:58.766 07:16:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.766 07:16:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.766 07:16:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.766 07:16:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.766 07:16:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.766 07:16:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.766 07:16:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.766 07:16:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.766 07:16:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.766 07:16:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.766 07:16:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.766 07:16:22 env -- scripts/common.sh@344 -- # case "$op" in 00:18:58.766 07:16:22 env -- scripts/common.sh@345 -- # : 1 00:18:58.766 07:16:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.766 07:16:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.766 07:16:22 env -- scripts/common.sh@365 -- # decimal 1 00:18:58.766 07:16:22 env -- scripts/common.sh@353 -- # local d=1 00:18:58.766 07:16:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.766 07:16:22 env -- scripts/common.sh@355 -- # echo 1 00:18:58.766 07:16:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.766 07:16:22 env -- scripts/common.sh@366 -- # decimal 2 00:18:58.766 07:16:22 env -- scripts/common.sh@353 -- # local d=2 00:18:58.766 07:16:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.766 07:16:22 env -- scripts/common.sh@355 -- # echo 2 00:18:58.766 07:16:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.766 07:16:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.766 07:16:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.766 07:16:22 env -- scripts/common.sh@368 -- # return 0 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.766 --rc genhtml_branch_coverage=1 00:18:58.766 --rc genhtml_function_coverage=1 00:18:58.766 --rc genhtml_legend=1 00:18:58.766 --rc geninfo_all_blocks=1 00:18:58.766 --rc geninfo_unexecuted_blocks=1 00:18:58.766 00:18:58.766 ' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.766 --rc genhtml_branch_coverage=1 00:18:58.766 --rc genhtml_function_coverage=1 00:18:58.766 --rc genhtml_legend=1 00:18:58.766 --rc geninfo_all_blocks=1 00:18:58.766 --rc geninfo_unexecuted_blocks=1 00:18:58.766 00:18:58.766 ' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.766 --rc genhtml_branch_coverage=1 00:18:58.766 --rc genhtml_function_coverage=1 00:18:58.766 --rc genhtml_legend=1 00:18:58.766 --rc geninfo_all_blocks=1 00:18:58.766 --rc geninfo_unexecuted_blocks=1 00:18:58.766 00:18:58.766 ' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:58.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.766 --rc genhtml_branch_coverage=1 00:18:58.766 --rc genhtml_function_coverage=1 00:18:58.766 --rc genhtml_legend=1 00:18:58.766 --rc geninfo_all_blocks=1 00:18:58.766 --rc geninfo_unexecuted_blocks=1 00:18:58.766 00:18:58.766 ' 00:18:58.766 07:16:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:58.766 07:16:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.766 07:16:22 env -- common/autotest_common.sh@10 -- # set +x 00:18:58.766 ************************************ 00:18:58.766 START TEST env_memory 00:18:58.766 ************************************ 00:18:58.766 07:16:22 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:18:58.766 00:18:58.766 00:18:58.766 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.766 http://cunit.sourceforge.net/ 00:18:58.766 00:18:58.766 00:18:58.766 Suite: memory 00:18:58.766 Test: alloc and free memory map ...[2024-11-20 07:16:22.856294] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:18:58.766 passed 00:18:58.766 Test: mem map translation ...[2024-11-20 07:16:22.929389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:18:58.767 [2024-11-20 07:16:22.929704] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:18:58.767 [2024-11-20 07:16:22.930006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:18:58.767 [2024-11-20 07:16:22.930299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:18:59.025 passed 00:18:59.025 Test: mem map registration ...[2024-11-20 07:16:23.041933] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:18:59.025 [2024-11-20 07:16:23.042188] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:18:59.025 passed 00:18:59.025 Test: mem map adjacent registrations ...passed 00:18:59.025 00:18:59.025 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.025 suites 1 1 n/a 0 0 00:18:59.025 tests 4 4 4 0 0 00:18:59.025 asserts 152 152 152 0 n/a 00:18:59.025 00:18:59.025 Elapsed time = 0.357 seconds 00:18:59.025 00:18:59.025 real 0m0.412s 00:18:59.025 user 0m0.369s 00:18:59.025 sys 0m0.031s 00:18:59.025 07:16:23 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.025 07:16:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:18:59.025 ************************************ 00:18:59.025 END TEST env_memory 00:18:59.025 ************************************ 00:18:59.284 07:16:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:59.284 07:16:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:59.284 07:16:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.284 07:16:23 env -- common/autotest_common.sh@10 -- # set +x 00:18:59.284 ************************************ 00:18:59.284 START TEST env_vtophys 00:18:59.284 ************************************ 00:18:59.284 07:16:23 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:18:59.284 EAL: lib.eal log level changed from notice to debug 00:18:59.284 EAL: Detected lcore 0 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 1 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 2 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 3 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 4 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 5 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 6 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 7 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 8 as core 0 on socket 0 00:18:59.284 EAL: Detected lcore 9 as core 0 on socket 0 00:18:59.284 EAL: Maximum logical cores by configuration: 128 00:18:59.284 EAL: Detected CPU lcores: 10 00:18:59.284 EAL: Detected NUMA nodes: 1 00:18:59.284 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:18:59.284 EAL: Detected shared linkage of DPDK 00:18:59.284 EAL: No 
shared files mode enabled, IPC will be disabled 00:18:59.284 EAL: Selected IOVA mode 'PA' 00:18:59.284 EAL: Probing VFIO support... 00:18:59.284 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:59.284 EAL: VFIO modules not loaded, skipping VFIO support... 00:18:59.284 EAL: Ask a virtual area of 0x2e000 bytes 00:18:59.284 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:18:59.284 EAL: Setting up physically contiguous memory... 00:18:59.284 EAL: Setting maximum number of open files to 524288 00:18:59.284 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:18:59.284 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:18:59.284 EAL: Ask a virtual area of 0x61000 bytes 00:18:59.284 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:18:59.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:59.284 EAL: Ask a virtual area of 0x400000000 bytes 00:18:59.284 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:18:59.284 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:18:59.284 EAL: Ask a virtual area of 0x61000 bytes 00:18:59.284 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:18:59.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:59.284 EAL: Ask a virtual area of 0x400000000 bytes 00:18:59.284 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:18:59.284 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:18:59.284 EAL: Ask a virtual area of 0x61000 bytes 00:18:59.284 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:18:59.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:59.284 EAL: Ask a virtual area of 0x400000000 bytes 00:18:59.284 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:18:59.284 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:18:59.284 EAL: Ask a virtual area of 0x61000 bytes 00:18:59.284 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:18:59.284 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:18:59.284 EAL: Ask a virtual area of 0x400000000 bytes 00:18:59.284 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:18:59.284 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:18:59.284 EAL: Hugepages will be freed exactly as allocated. 00:18:59.284 EAL: No shared files mode enabled, IPC is disabled 00:18:59.284 EAL: No shared files mode enabled, IPC is disabled 00:18:59.284 EAL: TSC frequency is ~2100000 KHz 00:18:59.284 EAL: Main lcore 0 is ready (tid=7ff62931ca40;cpuset=[0]) 00:18:59.284 EAL: Trying to obtain current memory policy. 00:18:59.284 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:59.284 EAL: Restoring previous memory policy: 0 00:18:59.284 EAL: request: mp_malloc_sync 00:18:59.284 EAL: No shared files mode enabled, IPC is disabled 00:18:59.284 EAL: Heap on socket 0 was expanded by 2MB 00:18:59.284 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:18:59.611 EAL: No PCI address specified using 'addr=' in: bus=pci 00:18:59.611 EAL: Mem event callback 'spdk:(nil)' registered 00:18:59.611 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:18:59.611 00:18:59.611 00:18:59.611 CUnit - A unit testing framework for C - Version 2.1-3 00:18:59.611 http://cunit.sourceforge.net/ 00:18:59.611 00:18:59.611 00:18:59.611 Suite: components_suite 00:18:59.870 Test: vtophys_malloc_test ...passed 00:18:59.870 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:18:59.870 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:59.870 EAL: Restoring previous memory policy: 4 00:18:59.870 EAL: Calling mem event callback 'spdk:(nil)' 00:18:59.870 EAL: request: mp_malloc_sync 00:18:59.870 EAL: No shared files mode enabled, IPC is disabled 00:18:59.870 EAL: Heap on socket 0 was expanded by 4MB 00:18:59.870 EAL: Calling mem event callback 'spdk:(nil)' 00:18:59.870 EAL: request: mp_malloc_sync 00:18:59.870 EAL: No shared files mode enabled, IPC is disabled 00:18:59.870 EAL: Heap on socket 0 was shrunk by 4MB 00:18:59.870 EAL: Trying to obtain current memory policy. 00:18:59.870 EAL: Setting policy MPOL_PREFERRED for socket 0 00:18:59.870 EAL: Restoring previous memory policy: 4 00:18:59.870 EAL: Calling mem event callback 'spdk:(nil)' 00:18:59.870 EAL: request: mp_malloc_sync 00:18:59.870 EAL: No shared files mode enabled, IPC is disabled 00:18:59.870 EAL: Heap on socket 0 was expanded by 6MB 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was shrunk by 6MB 00:19:00.129 EAL: Trying to obtain current memory policy. 00:19:00.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:00.129 EAL: Restoring previous memory policy: 4 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was expanded by 10MB 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was shrunk by 10MB 00:19:00.129 EAL: Trying to obtain current memory policy. 00:19:00.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:00.129 EAL: Restoring previous memory policy: 4 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was expanded by 18MB 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was shrunk by 18MB 00:19:00.129 EAL: Trying to obtain current memory policy. 00:19:00.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:00.129 EAL: Restoring previous memory policy: 4 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was expanded by 34MB 00:19:00.129 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.129 EAL: request: mp_malloc_sync 00:19:00.129 EAL: No shared files mode enabled, IPC is disabled 00:19:00.129 EAL: Heap on socket 0 was shrunk by 34MB 00:19:00.129 EAL: Trying to obtain current memory policy. 
00:19:00.129 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:00.387 EAL: Restoring previous memory policy: 4 00:19:00.387 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.387 EAL: request: mp_malloc_sync 00:19:00.387 EAL: No shared files mode enabled, IPC is disabled 00:19:00.387 EAL: Heap on socket 0 was expanded by 66MB 00:19:00.387 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.387 EAL: request: mp_malloc_sync 00:19:00.387 EAL: No shared files mode enabled, IPC is disabled 00:19:00.387 EAL: Heap on socket 0 was shrunk by 66MB 00:19:00.644 EAL: Trying to obtain current memory policy. 00:19:00.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:00.645 EAL: Restoring previous memory policy: 4 00:19:00.645 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.645 EAL: request: mp_malloc_sync 00:19:00.645 EAL: No shared files mode enabled, IPC is disabled 00:19:00.645 EAL: Heap on socket 0 was expanded by 130MB 00:19:00.902 EAL: Calling mem event callback 'spdk:(nil)' 00:19:00.902 EAL: request: mp_malloc_sync 00:19:00.902 EAL: No shared files mode enabled, IPC is disabled 00:19:00.902 EAL: Heap on socket 0 was shrunk by 130MB 00:19:01.163 EAL: Trying to obtain current memory policy. 00:19:01.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:01.163 EAL: Restoring previous memory policy: 4 00:19:01.163 EAL: Calling mem event callback 'spdk:(nil)' 00:19:01.163 EAL: request: mp_malloc_sync 00:19:01.163 EAL: No shared files mode enabled, IPC is disabled 00:19:01.163 EAL: Heap on socket 0 was expanded by 258MB 00:19:01.729 EAL: Calling mem event callback 'spdk:(nil)' 00:19:01.729 EAL: request: mp_malloc_sync 00:19:01.729 EAL: No shared files mode enabled, IPC is disabled 00:19:01.730 EAL: Heap on socket 0 was shrunk by 258MB 00:19:02.295 EAL: Trying to obtain current memory policy. 00:19:02.295 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:02.295 EAL: Restoring previous memory policy: 4 00:19:02.295 EAL: Calling mem event callback 'spdk:(nil)' 00:19:02.295 EAL: request: mp_malloc_sync 00:19:02.295 EAL: No shared files mode enabled, IPC is disabled 00:19:02.295 EAL: Heap on socket 0 was expanded by 514MB 00:19:03.672 EAL: Calling mem event callback 'spdk:(nil)' 00:19:03.672 EAL: request: mp_malloc_sync 00:19:03.672 EAL: No shared files mode enabled, IPC is disabled 00:19:03.672 EAL: Heap on socket 0 was shrunk by 514MB 00:19:04.610 EAL: Trying to obtain current memory policy. 
00:19:04.610 EAL: Setting policy MPOL_PREFERRED for socket 0 00:19:04.610 EAL: Restoring previous memory policy: 4 00:19:04.610 EAL: Calling mem event callback 'spdk:(nil)' 00:19:04.610 EAL: request: mp_malloc_sync 00:19:04.610 EAL: No shared files mode enabled, IPC is disabled 00:19:04.610 EAL: Heap on socket 0 was expanded by 1026MB 00:19:07.180 EAL: Calling mem event callback 'spdk:(nil)' 00:19:07.180 EAL: request: mp_malloc_sync 00:19:07.180 EAL: No shared files mode enabled, IPC is disabled 00:19:07.180 EAL: Heap on socket 0 was shrunk by 1026MB 00:19:09.085 passed 00:19:09.085 00:19:09.085 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.085 suites 1 1 n/a 0 0 00:19:09.085 tests 2 2 2 0 0 00:19:09.085 asserts 5621 5621 5621 0 n/a 00:19:09.085 00:19:09.085 Elapsed time = 9.521 seconds 00:19:09.085 EAL: Calling mem event callback 'spdk:(nil)' 00:19:09.085 EAL: request: mp_malloc_sync 00:19:09.085 EAL: No shared files mode enabled, IPC is disabled 00:19:09.085 EAL: Heap on socket 0 was shrunk by 2MB 00:19:09.085 EAL: No shared files mode enabled, IPC is disabled 00:19:09.085 EAL: No shared files mode enabled, IPC is disabled 00:19:09.085 EAL: No shared files mode enabled, IPC is disabled 00:19:09.085 00:19:09.085 real 0m9.929s 00:19:09.085 user 0m8.734s 00:19:09.085 sys 0m1.013s 00:19:09.085 07:16:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.085 ************************************ 00:19:09.085 END TEST env_vtophys 00:19:09.085 ************************************ 00:19:09.085 07:16:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:19:09.085 07:16:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:19:09.085 07:16:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.085 07:16:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.085 07:16:33 env -- common/autotest_common.sh@10 -- # set +x 00:19:09.085 ************************************ 00:19:09.085 START TEST env_pci 00:19:09.085 ************************************ 00:19:09.085 07:16:33 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:19:09.353 00:19:09.353 00:19:09.353 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.353 http://cunit.sourceforge.net/ 00:19:09.353 00:19:09.353 00:19:09.353 Suite: pci 00:19:09.353 Test: pci_hook ...[2024-11-20 07:16:33.289227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58107 has claimed it 00:19:09.353 passed 00:19:09.353 00:19:09.353 Run Summary: Type Total Ran Passed Failed Inactive 00:19:09.353 suites 1 1 n/a 0 0 00:19:09.353 tests 1 1 1 0 0 00:19:09.353 asserts 25 25 25 0 n/a 00:19:09.353 00:19:09.353 Elapsed time = 0.010 seconds 00:19:09.353 EAL: Cannot find device (10000:00:01.0) 00:19:09.353 EAL: Failed to attach device on primary process 00:19:09.353 00:19:09.353 real 0m0.102s 00:19:09.353 user 0m0.041s 00:19:09.353 sys 0m0.060s 00:19:09.353 07:16:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.353 ************************************ 00:19:09.353 END TEST env_pci 00:19:09.353 ************************************ 00:19:09.353 07:16:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:19:09.353 07:16:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:19:09.353 07:16:33 env -- env/env.sh@15 -- # uname 00:19:09.353 07:16:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:19:09.353 07:16:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:19:09.353 07:16:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:09.353 07:16:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:09.353 07:16:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.353 07:16:33 env -- common/autotest_common.sh@10 -- # set +x 00:19:09.353 ************************************ 00:19:09.353 START TEST env_dpdk_post_init 00:19:09.353 ************************************ 00:19:09.353 07:16:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:19:09.353 EAL: Detected CPU lcores: 10 00:19:09.353 EAL: Detected NUMA nodes: 1 00:19:09.353 EAL: Detected shared linkage of DPDK 00:19:09.353 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:09.624 EAL: Selected IOVA mode 'PA' 00:19:09.624 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:09.624 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:19:09.624 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:19:09.624 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:19:09.624 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:19:09.624 Starting DPDK initialization... 00:19:09.624 Starting SPDK post initialization... 00:19:09.624 SPDK NVMe probe 00:19:09.624 Attaching to 0000:00:10.0 00:19:09.624 Attaching to 0000:00:11.0 00:19:09.624 Attaching to 0000:00:12.0 00:19:09.624 Attaching to 0000:00:13.0 00:19:09.624 Attached to 0000:00:10.0 00:19:09.624 Attached to 0000:00:11.0 00:19:09.624 Attached to 0000:00:13.0 00:19:09.624 Attached to 0000:00:12.0 00:19:09.624 Cleaning up... 
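The -c 0x1 and --base-virtaddr=0x200000000000 arguments used in the env_dpdk_post_init run above are assembled by env.sh@14-22 in the trace. A minimal bash sketch of that assembly, with the path and values taken from this run and the run_test wrapper omitted (the base-virtaddr flag is appended only when uname reports Linux):

    argv='-c 0x1 '
    [ "$(uname)" = Linux ] && argv+='--base-virtaddr=0x200000000000'
    # $argv is deliberately left unquoted so the two flags word-split:
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv

Pinning base-virtaddr keeps the DPDK mappings at a predictable address, consistent with the 0x200000000000 virtual areas reserved in the vtophys run above.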
00:19:09.624 00:19:09.624 real 0m0.381s 00:19:09.624 user 0m0.133s 00:19:09.624 sys 0m0.147s 00:19:09.624 07:16:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.624 07:16:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:19:09.624 ************************************ 00:19:09.624 END TEST env_dpdk_post_init 00:19:09.624 ************************************ 00:19:09.882 07:16:33 env -- env/env.sh@26 -- # uname 00:19:09.882 07:16:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:19:09.882 07:16:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:19:09.882 07:16:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:09.882 07:16:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.882 07:16:33 env -- common/autotest_common.sh@10 -- # set +x 00:19:09.882 ************************************ 00:19:09.882 START TEST env_mem_callbacks 00:19:09.882 ************************************ 00:19:09.882 07:16:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:19:09.882 EAL: Detected CPU lcores: 10 00:19:09.882 EAL: Detected NUMA nodes: 1 00:19:09.882 EAL: Detected shared linkage of DPDK 00:19:09.882 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:19:09.882 EAL: Selected IOVA mode 'PA' 00:19:09.882 TELEMETRY: No legacy callbacks, legacy socket not created 00:19:09.882 00:19:09.882 00:19:09.882 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.882 http://cunit.sourceforge.net/ 00:19:09.882 00:19:09.882 00:19:09.882 Suite: memory 00:19:09.882 Test: test ... 00:19:09.882 register 0x200000200000 2097152 00:19:09.882 malloc 3145728 00:19:09.882 register 0x200000400000 4194304 00:19:09.882 buf 0x2000004fffc0 len 3145728 PASSED 00:19:09.882 malloc 64 00:19:09.882 buf 0x2000004ffec0 len 64 PASSED 00:19:09.882 malloc 4194304 00:19:09.882 register 0x200000800000 6291456 00:19:09.882 buf 0x2000009fffc0 len 4194304 PASSED 00:19:09.882 free 0x2000004fffc0 3145728 00:19:09.882 free 0x2000004ffec0 64 00:19:10.141 unregister 0x200000400000 4194304 PASSED 00:19:10.141 free 0x2000009fffc0 4194304 00:19:10.141 unregister 0x200000800000 6291456 PASSED 00:19:10.141 malloc 8388608 00:19:10.141 register 0x200000400000 10485760 00:19:10.141 buf 0x2000005fffc0 len 8388608 PASSED 00:19:10.141 free 0x2000005fffc0 8388608 00:19:10.141 unregister 0x200000400000 10485760 PASSED 00:19:10.141 passed 00:19:10.141 00:19:10.141 Run Summary: Type Total Ran Passed Failed Inactive 00:19:10.141 suites 1 1 n/a 0 0 00:19:10.141 tests 1 1 1 0 0 00:19:10.141 asserts 15 15 15 0 n/a 00:19:10.141 00:19:10.141 Elapsed time = 0.110 seconds 00:19:10.141 00:19:10.141 real 0m0.345s 00:19:10.141 user 0m0.158s 00:19:10.141 sys 0m0.083s 00:19:10.141 07:16:34 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.141 07:16:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:19:10.141 ************************************ 00:19:10.141 END TEST env_mem_callbacks 00:19:10.141 ************************************ 00:19:10.141 ************************************ 00:19:10.141 END TEST env 00:19:10.141 ************************************ 00:19:10.141 00:19:10.141 real 0m11.708s 00:19:10.141 user 0m9.645s 00:19:10.141 sys 0m1.662s 00:19:10.141 07:16:34 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.141 07:16:34 env -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.141 07:16:34 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:19:10.141 07:16:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.141 07:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.141 07:16:34 -- common/autotest_common.sh@10 -- # set +x 00:19:10.141 ************************************ 00:19:10.141 START TEST rpc 00:19:10.141 ************************************ 00:19:10.141 07:16:34 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:19:10.400 * Looking for test storage... 00:19:10.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.400 07:16:34 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.400 07:16:34 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.400 07:16:34 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.400 07:16:34 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.400 07:16:34 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.400 07:16:34 rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:10.400 07:16:34 rpc -- scripts/common.sh@345 -- # : 1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.400 07:16:34 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.400 07:16:34 rpc -- scripts/common.sh@365 -- # decimal 1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@353 -- # local d=1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.400 07:16:34 rpc -- scripts/common.sh@355 -- # echo 1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.400 07:16:34 rpc -- scripts/common.sh@366 -- # decimal 2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@353 -- # local d=2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.400 07:16:34 rpc -- scripts/common.sh@355 -- # echo 2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.400 07:16:34 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.400 07:16:34 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.400 07:16:34 rpc -- scripts/common.sh@368 -- # return 0 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.400 --rc genhtml_branch_coverage=1 00:19:10.400 --rc genhtml_function_coverage=1 00:19:10.400 --rc genhtml_legend=1 00:19:10.400 --rc geninfo_all_blocks=1 00:19:10.400 --rc geninfo_unexecuted_blocks=1 00:19:10.400 00:19:10.400 ' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.400 --rc genhtml_branch_coverage=1 00:19:10.400 --rc genhtml_function_coverage=1 00:19:10.400 --rc genhtml_legend=1 00:19:10.400 --rc geninfo_all_blocks=1 00:19:10.400 --rc geninfo_unexecuted_blocks=1 00:19:10.400 00:19:10.400 ' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.400 --rc genhtml_branch_coverage=1 00:19:10.400 --rc genhtml_function_coverage=1 00:19:10.400 --rc genhtml_legend=1 00:19:10.400 --rc geninfo_all_blocks=1 00:19:10.400 --rc geninfo_unexecuted_blocks=1 00:19:10.400 00:19:10.400 ' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.400 --rc genhtml_branch_coverage=1 00:19:10.400 --rc genhtml_function_coverage=1 00:19:10.400 --rc genhtml_legend=1 00:19:10.400 --rc geninfo_all_blocks=1 00:19:10.400 --rc geninfo_unexecuted_blocks=1 00:19:10.400 00:19:10.400 ' 00:19:10.400 07:16:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58245 00:19:10.400 07:16:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:19:10.400 07:16:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:10.400 07:16:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58245 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@835 -- # '[' -z 58245 ']' 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
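rpc.sh@64-67 in the trace show the bring-up pattern every rpc_* subtest below relies on: start spdk_tgt with the bdev tracepoint group enabled, arm a cleanup trap, then block until the RPC socket answers. As a sketch (killprocess and waitforlisten are helpers from test/common/autotest_common.sh, and the background-job form is inferred from the trace, which only prints the resolved pid 58245):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!                 # 58245 in this run
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_pid"   # polls /var/tmp/spdk.sock until the target accepts RPCs

The trap matters: if any assertion later fails, the EXIT handler still kills the target instead of leaking a hugepage-holding process into the next test.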
00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.400 07:16:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.659 [2024-11-20 07:16:34.722453] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:10.659 [2024-11-20 07:16:34.723254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 00:19:10.921 [2024-11-20 07:16:34.950384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.182 [2024-11-20 07:16:35.148078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:19:11.182 [2024-11-20 07:16:35.148404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58245' to capture a snapshot of events at runtime. 00:19:11.182 [2024-11-20 07:16:35.148597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.182 [2024-11-20 07:16:35.148665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.182 [2024-11-20 07:16:35.148764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58245 for offline analysis/debug. 00:19:11.182 [2024-11-20 07:16:35.150504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.170 07:16:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.170 07:16:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:19:12.170 07:16:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:19:12.170 07:16:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:19:12.170 07:16:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:19:12.170 07:16:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:19:12.170 07:16:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.170 07:16:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.170 07:16:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.170 ************************************ 00:19:12.170 START TEST rpc_integrity 00:19:12.170 ************************************ 00:19:12.170 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:19:12.170 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:12.170 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.170 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.170 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.170 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:19:12.170 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:19:12.170 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:19:12.170 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:19:12.170 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.170 07:16:36 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:19:12.430 { 00:19:12.430 "name": "Malloc0", 00:19:12.430 "aliases": [ 00:19:12.430 "1f78c2a2-2eae-4069-b313-fa1975236c11" 00:19:12.430 ], 00:19:12.430 "product_name": "Malloc disk", 00:19:12.430 "block_size": 512, 00:19:12.430 "num_blocks": 16384, 00:19:12.430 "uuid": "1f78c2a2-2eae-4069-b313-fa1975236c11", 00:19:12.430 "assigned_rate_limits": { 00:19:12.430 "rw_ios_per_sec": 0, 00:19:12.430 "rw_mbytes_per_sec": 0, 00:19:12.430 "r_mbytes_per_sec": 0, 00:19:12.430 "w_mbytes_per_sec": 0 00:19:12.430 }, 00:19:12.430 "claimed": false, 00:19:12.430 "zoned": false, 00:19:12.430 "supported_io_types": { 00:19:12.430 "read": true, 00:19:12.430 "write": true, 00:19:12.430 "unmap": true, 00:19:12.430 "flush": true, 00:19:12.430 "reset": true, 00:19:12.430 "nvme_admin": false, 00:19:12.430 "nvme_io": false, 00:19:12.430 "nvme_io_md": false, 00:19:12.430 "write_zeroes": true, 00:19:12.430 "zcopy": true, 00:19:12.430 "get_zone_info": false, 00:19:12.430 "zone_management": false, 00:19:12.430 "zone_append": false, 00:19:12.430 "compare": false, 00:19:12.430 "compare_and_write": false, 00:19:12.430 "abort": true, 00:19:12.430 "seek_hole": false, 00:19:12.430 "seek_data": false, 00:19:12.430 "copy": true, 00:19:12.430 "nvme_iov_md": false 00:19:12.430 }, 00:19:12.430 "memory_domains": [ 00:19:12.430 { 00:19:12.430 "dma_device_id": "system", 00:19:12.430 "dma_device_type": 1 00:19:12.430 }, 00:19:12.430 { 00:19:12.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.430 "dma_device_type": 2 00:19:12.430 } 00:19:12.430 ], 00:19:12.430 "driver_specific": {} 00:19:12.430 } 00:19:12.430 ]' 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 [2024-11-20 07:16:36.411546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:19:12.430 [2024-11-20 07:16:36.411646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.430 [2024-11-20 07:16:36.411697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:12.430 [2024-11-20 07:16:36.411716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.430 [2024-11-20 07:16:36.415160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.430 [2024-11-20 07:16:36.415217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:19:12.430 Passthru0 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.430 
07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.430 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.430 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:19:12.430 { 00:19:12.430 "name": "Malloc0", 00:19:12.430 "aliases": [ 00:19:12.430 "1f78c2a2-2eae-4069-b313-fa1975236c11" 00:19:12.430 ], 00:19:12.430 "product_name": "Malloc disk", 00:19:12.430 "block_size": 512, 00:19:12.430 "num_blocks": 16384, 00:19:12.430 "uuid": "1f78c2a2-2eae-4069-b313-fa1975236c11", 00:19:12.430 "assigned_rate_limits": { 00:19:12.430 "rw_ios_per_sec": 0, 00:19:12.430 "rw_mbytes_per_sec": 0, 00:19:12.430 "r_mbytes_per_sec": 0, 00:19:12.430 "w_mbytes_per_sec": 0 00:19:12.430 }, 00:19:12.430 "claimed": true, 00:19:12.430 "claim_type": "exclusive_write", 00:19:12.430 "zoned": false, 00:19:12.430 "supported_io_types": { 00:19:12.430 "read": true, 00:19:12.430 "write": true, 00:19:12.430 "unmap": true, 00:19:12.430 "flush": true, 00:19:12.430 "reset": true, 00:19:12.430 "nvme_admin": false, 00:19:12.430 "nvme_io": false, 00:19:12.430 "nvme_io_md": false, 00:19:12.430 "write_zeroes": true, 00:19:12.430 "zcopy": true, 00:19:12.430 "get_zone_info": false, 00:19:12.430 "zone_management": false, 00:19:12.430 "zone_append": false, 00:19:12.430 "compare": false, 00:19:12.430 "compare_and_write": false, 00:19:12.430 "abort": true, 00:19:12.430 "seek_hole": false, 00:19:12.430 "seek_data": false, 00:19:12.430 "copy": true, 00:19:12.430 "nvme_iov_md": false 00:19:12.430 }, 00:19:12.430 "memory_domains": [ 00:19:12.430 { 00:19:12.430 "dma_device_id": "system", 00:19:12.430 "dma_device_type": 1 00:19:12.430 }, 00:19:12.430 { 00:19:12.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.430 "dma_device_type": 2 00:19:12.430 } 00:19:12.430 ], 00:19:12.430 "driver_specific": {} 00:19:12.430 }, 00:19:12.430 { 00:19:12.430 "name": "Passthru0", 00:19:12.430 "aliases": [ 00:19:12.430 "e615cb5b-c97b-5775-b26f-fc046c8954bb" 00:19:12.430 ], 00:19:12.430 "product_name": "passthru", 00:19:12.430 "block_size": 512, 00:19:12.430 "num_blocks": 16384, 00:19:12.430 "uuid": "e615cb5b-c97b-5775-b26f-fc046c8954bb", 00:19:12.430 "assigned_rate_limits": { 00:19:12.430 "rw_ios_per_sec": 0, 00:19:12.430 "rw_mbytes_per_sec": 0, 00:19:12.430 "r_mbytes_per_sec": 0, 00:19:12.430 "w_mbytes_per_sec": 0 00:19:12.430 }, 00:19:12.430 "claimed": false, 00:19:12.430 "zoned": false, 00:19:12.430 "supported_io_types": { 00:19:12.430 "read": true, 00:19:12.430 "write": true, 00:19:12.430 "unmap": true, 00:19:12.430 "flush": true, 00:19:12.431 "reset": true, 00:19:12.431 "nvme_admin": false, 00:19:12.431 "nvme_io": false, 00:19:12.431 "nvme_io_md": false, 00:19:12.431 "write_zeroes": true, 00:19:12.431 "zcopy": true, 00:19:12.431 "get_zone_info": false, 00:19:12.431 "zone_management": false, 00:19:12.431 "zone_append": false, 00:19:12.431 "compare": false, 00:19:12.431 "compare_and_write": false, 00:19:12.431 "abort": true, 00:19:12.431 "seek_hole": false, 00:19:12.431 "seek_data": false, 00:19:12.431 "copy": true, 00:19:12.431 "nvme_iov_md": false 00:19:12.431 }, 00:19:12.431 "memory_domains": [ 00:19:12.431 { 00:19:12.431 "dma_device_id": "system", 00:19:12.431 "dma_device_type": 1 00:19:12.431 }, 00:19:12.431 { 00:19:12.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.431 "dma_device_type": 2 
00:19:12.431 } 00:19:12.431 ], 00:19:12.431 "driver_specific": { 00:19:12.431 "passthru": { 00:19:12.431 "name": "Passthru0", 00:19:12.431 "base_bdev_name": "Malloc0" 00:19:12.431 } 00:19:12.431 } 00:19:12.431 } 00:19:12.431 ]' 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:19:12.431 ************************************ 00:19:12.431 END TEST rpc_integrity 00:19:12.431 ************************************ 00:19:12.431 07:16:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:19:12.431 00:19:12.431 real 0m0.333s 00:19:12.431 user 0m0.173s 00:19:12.431 sys 0m0.052s 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.431 07:16:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 07:16:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:19:12.689 07:16:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.689 07:16:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.689 07:16:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 ************************************ 00:19:12.689 START TEST rpc_plugins 00:19:12.689 ************************************ 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:19:12.689 { 00:19:12.689 "name": "Malloc1", 00:19:12.689 "aliases": 
[ 00:19:12.689 "72373a07-cdc8-49a7-88ec-7e68cbf70209" 00:19:12.689 ], 00:19:12.689 "product_name": "Malloc disk", 00:19:12.689 "block_size": 4096, 00:19:12.689 "num_blocks": 256, 00:19:12.689 "uuid": "72373a07-cdc8-49a7-88ec-7e68cbf70209", 00:19:12.689 "assigned_rate_limits": { 00:19:12.689 "rw_ios_per_sec": 0, 00:19:12.689 "rw_mbytes_per_sec": 0, 00:19:12.689 "r_mbytes_per_sec": 0, 00:19:12.689 "w_mbytes_per_sec": 0 00:19:12.689 }, 00:19:12.689 "claimed": false, 00:19:12.689 "zoned": false, 00:19:12.689 "supported_io_types": { 00:19:12.689 "read": true, 00:19:12.689 "write": true, 00:19:12.689 "unmap": true, 00:19:12.689 "flush": true, 00:19:12.689 "reset": true, 00:19:12.689 "nvme_admin": false, 00:19:12.689 "nvme_io": false, 00:19:12.689 "nvme_io_md": false, 00:19:12.689 "write_zeroes": true, 00:19:12.689 "zcopy": true, 00:19:12.689 "get_zone_info": false, 00:19:12.689 "zone_management": false, 00:19:12.689 "zone_append": false, 00:19:12.689 "compare": false, 00:19:12.689 "compare_and_write": false, 00:19:12.689 "abort": true, 00:19:12.689 "seek_hole": false, 00:19:12.689 "seek_data": false, 00:19:12.689 "copy": true, 00:19:12.689 "nvme_iov_md": false 00:19:12.689 }, 00:19:12.689 "memory_domains": [ 00:19:12.689 { 00:19:12.689 "dma_device_id": "system", 00:19:12.689 "dma_device_type": 1 00:19:12.689 }, 00:19:12.689 { 00:19:12.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.689 "dma_device_type": 2 00:19:12.689 } 00:19:12.689 ], 00:19:12.689 "driver_specific": {} 00:19:12.689 } 00:19:12.689 ]' 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:19:12.689 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.689 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.690 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.690 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:19:12.690 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:19:12.690 ************************************ 00:19:12.690 END TEST rpc_plugins 00:19:12.690 ************************************ 00:19:12.690 07:16:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:19:12.690 00:19:12.690 real 0m0.161s 00:19:12.690 user 0m0.094s 00:19:12.690 sys 0m0.027s 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.690 07:16:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:19:12.690 07:16:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:19:12.690 07:16:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:12.690 07:16:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.690 07:16:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.690 ************************************ 00:19:12.690 START TEST rpc_trace_cmd_test 00:19:12.690 ************************************ 00:19:12.690 07:16:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:19:12.690 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:19:12.690 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:19:12.690 07:16:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.690 07:16:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:19:12.948 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58245", 00:19:12.948 "tpoint_group_mask": "0x8", 00:19:12.948 "iscsi_conn": { 00:19:12.948 "mask": "0x2", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "scsi": { 00:19:12.948 "mask": "0x4", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "bdev": { 00:19:12.948 "mask": "0x8", 00:19:12.948 "tpoint_mask": "0xffffffffffffffff" 00:19:12.948 }, 00:19:12.948 "nvmf_rdma": { 00:19:12.948 "mask": "0x10", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "nvmf_tcp": { 00:19:12.948 "mask": "0x20", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "ftl": { 00:19:12.948 "mask": "0x40", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "blobfs": { 00:19:12.948 "mask": "0x80", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "dsa": { 00:19:12.948 "mask": "0x200", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "thread": { 00:19:12.948 "mask": "0x400", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "nvme_pcie": { 00:19:12.948 "mask": "0x800", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "iaa": { 00:19:12.948 "mask": "0x1000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "nvme_tcp": { 00:19:12.948 "mask": "0x2000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "bdev_nvme": { 00:19:12.948 "mask": "0x4000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "sock": { 00:19:12.948 "mask": "0x8000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "blob": { 00:19:12.948 "mask": "0x10000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "bdev_raid": { 00:19:12.948 "mask": "0x20000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 }, 00:19:12.948 "scheduler": { 00:19:12.948 "mask": "0x40000", 00:19:12.948 "tpoint_mask": "0x0" 00:19:12.948 } 00:19:12.948 }' 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:19:12.948 07:16:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:19:12.948 ************************************ 00:19:12.948 END TEST rpc_trace_cmd_test 00:19:12.948 ************************************ 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:19:12.948 00:19:12.948 real 0m0.220s 
00:19:12.948 user 0m0.182s 00:19:12.948 sys 0m0.029s 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.948 07:16:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 07:16:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:19:13.207 07:16:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:19:13.207 07:16:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:19:13.207 07:16:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:13.207 07:16:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.207 07:16:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 ************************************ 00:19:13.207 START TEST rpc_daemon_integrity 00:19:13.207 ************************************ 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:19:13.207 { 00:19:13.207 "name": "Malloc2", 00:19:13.207 "aliases": [ 00:19:13.207 "c5c76203-047c-4815-bf5c-f4f62cfd4a22" 00:19:13.207 ], 00:19:13.207 "product_name": "Malloc disk", 00:19:13.207 "block_size": 512, 00:19:13.207 "num_blocks": 16384, 00:19:13.207 "uuid": "c5c76203-047c-4815-bf5c-f4f62cfd4a22", 00:19:13.207 "assigned_rate_limits": { 00:19:13.207 "rw_ios_per_sec": 0, 00:19:13.207 "rw_mbytes_per_sec": 0, 00:19:13.207 "r_mbytes_per_sec": 0, 00:19:13.207 "w_mbytes_per_sec": 0 00:19:13.207 }, 00:19:13.207 "claimed": false, 00:19:13.207 "zoned": false, 00:19:13.207 "supported_io_types": { 00:19:13.207 "read": true, 00:19:13.207 "write": true, 00:19:13.207 "unmap": true, 00:19:13.207 "flush": true, 00:19:13.207 "reset": true, 00:19:13.207 "nvme_admin": false, 00:19:13.207 "nvme_io": false, 00:19:13.207 "nvme_io_md": false, 00:19:13.207 "write_zeroes": true, 00:19:13.207 "zcopy": true, 00:19:13.207 "get_zone_info": false, 00:19:13.207 "zone_management": false, 00:19:13.207 "zone_append": false, 00:19:13.207 "compare": false, 00:19:13.207 
"compare_and_write": false, 00:19:13.207 "abort": true, 00:19:13.207 "seek_hole": false, 00:19:13.207 "seek_data": false, 00:19:13.207 "copy": true, 00:19:13.207 "nvme_iov_md": false 00:19:13.207 }, 00:19:13.207 "memory_domains": [ 00:19:13.207 { 00:19:13.207 "dma_device_id": "system", 00:19:13.207 "dma_device_type": 1 00:19:13.207 }, 00:19:13.207 { 00:19:13.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.207 "dma_device_type": 2 00:19:13.207 } 00:19:13.207 ], 00:19:13.207 "driver_specific": {} 00:19:13.207 } 00:19:13.207 ]' 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.207 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.207 [2024-11-20 07:16:37.318871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:19:13.207 [2024-11-20 07:16:37.319149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.207 [2024-11-20 07:16:37.319193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:13.207 [2024-11-20 07:16:37.319213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.207 [2024-11-20 07:16:37.322657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.208 [2024-11-20 07:16:37.322842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:19:13.208 Passthru0 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:19:13.208 { 00:19:13.208 "name": "Malloc2", 00:19:13.208 "aliases": [ 00:19:13.208 "c5c76203-047c-4815-bf5c-f4f62cfd4a22" 00:19:13.208 ], 00:19:13.208 "product_name": "Malloc disk", 00:19:13.208 "block_size": 512, 00:19:13.208 "num_blocks": 16384, 00:19:13.208 "uuid": "c5c76203-047c-4815-bf5c-f4f62cfd4a22", 00:19:13.208 "assigned_rate_limits": { 00:19:13.208 "rw_ios_per_sec": 0, 00:19:13.208 "rw_mbytes_per_sec": 0, 00:19:13.208 "r_mbytes_per_sec": 0, 00:19:13.208 "w_mbytes_per_sec": 0 00:19:13.208 }, 00:19:13.208 "claimed": true, 00:19:13.208 "claim_type": "exclusive_write", 00:19:13.208 "zoned": false, 00:19:13.208 "supported_io_types": { 00:19:13.208 "read": true, 00:19:13.208 "write": true, 00:19:13.208 "unmap": true, 00:19:13.208 "flush": true, 00:19:13.208 "reset": true, 00:19:13.208 "nvme_admin": false, 00:19:13.208 "nvme_io": false, 00:19:13.208 "nvme_io_md": false, 00:19:13.208 "write_zeroes": true, 00:19:13.208 "zcopy": true, 00:19:13.208 "get_zone_info": false, 00:19:13.208 "zone_management": false, 00:19:13.208 "zone_append": false, 00:19:13.208 "compare": false, 00:19:13.208 "compare_and_write": false, 00:19:13.208 "abort": true, 00:19:13.208 "seek_hole": false, 00:19:13.208 "seek_data": false, 
00:19:13.208 "copy": true, 00:19:13.208 "nvme_iov_md": false 00:19:13.208 }, 00:19:13.208 "memory_domains": [ 00:19:13.208 { 00:19:13.208 "dma_device_id": "system", 00:19:13.208 "dma_device_type": 1 00:19:13.208 }, 00:19:13.208 { 00:19:13.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.208 "dma_device_type": 2 00:19:13.208 } 00:19:13.208 ], 00:19:13.208 "driver_specific": {} 00:19:13.208 }, 00:19:13.208 { 00:19:13.208 "name": "Passthru0", 00:19:13.208 "aliases": [ 00:19:13.208 "071b85fc-6fdc-58bc-b1ab-f5140ee7ddef" 00:19:13.208 ], 00:19:13.208 "product_name": "passthru", 00:19:13.208 "block_size": 512, 00:19:13.208 "num_blocks": 16384, 00:19:13.208 "uuid": "071b85fc-6fdc-58bc-b1ab-f5140ee7ddef", 00:19:13.208 "assigned_rate_limits": { 00:19:13.208 "rw_ios_per_sec": 0, 00:19:13.208 "rw_mbytes_per_sec": 0, 00:19:13.208 "r_mbytes_per_sec": 0, 00:19:13.208 "w_mbytes_per_sec": 0 00:19:13.208 }, 00:19:13.208 "claimed": false, 00:19:13.208 "zoned": false, 00:19:13.208 "supported_io_types": { 00:19:13.208 "read": true, 00:19:13.208 "write": true, 00:19:13.208 "unmap": true, 00:19:13.208 "flush": true, 00:19:13.208 "reset": true, 00:19:13.208 "nvme_admin": false, 00:19:13.208 "nvme_io": false, 00:19:13.208 "nvme_io_md": false, 00:19:13.208 "write_zeroes": true, 00:19:13.208 "zcopy": true, 00:19:13.208 "get_zone_info": false, 00:19:13.208 "zone_management": false, 00:19:13.208 "zone_append": false, 00:19:13.208 "compare": false, 00:19:13.208 "compare_and_write": false, 00:19:13.208 "abort": true, 00:19:13.208 "seek_hole": false, 00:19:13.208 "seek_data": false, 00:19:13.208 "copy": true, 00:19:13.208 "nvme_iov_md": false 00:19:13.208 }, 00:19:13.208 "memory_domains": [ 00:19:13.208 { 00:19:13.208 "dma_device_id": "system", 00:19:13.208 "dma_device_type": 1 00:19:13.208 }, 00:19:13.208 { 00:19:13.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.208 "dma_device_type": 2 00:19:13.208 } 00:19:13.208 ], 00:19:13.208 "driver_specific": { 00:19:13.208 "passthru": { 00:19:13.208 "name": "Passthru0", 00:19:13.208 "base_bdev_name": "Malloc2" 00:19:13.208 } 00:19:13.208 } 00:19:13.208 } 00:19:13.208 ]' 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.208 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:19:13.467 ************************************ 00:19:13.467 END TEST rpc_daemon_integrity 00:19:13.467 ************************************ 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:19:13.467 00:19:13.467 real 0m0.353s 00:19:13.467 user 0m0.181s 00:19:13.467 sys 0m0.058s 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.467 07:16:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:19:13.467 07:16:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:13.467 07:16:37 rpc -- rpc/rpc.sh@84 -- # killprocess 58245 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 58245 ']' 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@958 -- # kill -0 58245 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@959 -- # uname 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58245 00:19:13.467 killing process with pid 58245 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58245' 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@973 -- # kill 58245 00:19:13.467 07:16:37 rpc -- common/autotest_common.sh@978 -- # wait 58245 00:19:16.750 ************************************ 00:19:16.750 END TEST rpc 00:19:16.750 ************************************ 00:19:16.750 00:19:16.750 real 0m6.185s 00:19:16.750 user 0m6.534s 00:19:16.750 sys 0m1.237s 00:19:16.750 07:16:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.750 07:16:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.750 07:16:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:19:16.750 07:16:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.750 07:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.750 07:16:40 -- common/autotest_common.sh@10 -- # set +x 00:19:16.750 ************************************ 00:19:16.750 START TEST skip_rpc 00:19:16.750 ************************************ 00:19:16.750 07:16:40 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:19:16.750 * Looking for test storage... 
00:19:16.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:19:16.750 07:16:40 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.750 07:16:40 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.750 07:16:40 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.750 07:16:40 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:19:16.750 07:16:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.751 07:16:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.751 07:16:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.751 07:16:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.751 --rc genhtml_branch_coverage=1 00:19:16.751 --rc genhtml_function_coverage=1 00:19:16.751 --rc genhtml_legend=1 00:19:16.751 --rc geninfo_all_blocks=1 00:19:16.751 --rc geninfo_unexecuted_blocks=1 00:19:16.751 00:19:16.751 ' 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.751 --rc genhtml_branch_coverage=1 00:19:16.751 --rc genhtml_function_coverage=1 00:19:16.751 --rc genhtml_legend=1 00:19:16.751 --rc geninfo_all_blocks=1 00:19:16.751 --rc geninfo_unexecuted_blocks=1 00:19:16.751 00:19:16.751 ' 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:19:16.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.751 --rc genhtml_branch_coverage=1 00:19:16.751 --rc genhtml_function_coverage=1 00:19:16.751 --rc genhtml_legend=1 00:19:16.751 --rc geninfo_all_blocks=1 00:19:16.751 --rc geninfo_unexecuted_blocks=1 00:19:16.751 00:19:16.751 ' 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.751 --rc genhtml_branch_coverage=1 00:19:16.751 --rc genhtml_function_coverage=1 00:19:16.751 --rc genhtml_legend=1 00:19:16.751 --rc geninfo_all_blocks=1 00:19:16.751 --rc geninfo_unexecuted_blocks=1 00:19:16.751 00:19:16.751 ' 00:19:16.751 07:16:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:16.751 07:16:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:16.751 07:16:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.751 07:16:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.751 ************************************ 00:19:16.751 START TEST skip_rpc 00:19:16.751 ************************************ 00:19:16.751 07:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:19:16.751 07:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58485 00:19:16.751 07:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:16.751 07:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:19:16.751 07:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:19:16.751 [2024-11-20 07:16:40.922101] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
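spdk_tgt was just launched with --no-rpc-server, so the rpc_cmd spdk_get_version attempt traced on the following lines has to fail, and the NOT wrapper asserts exactly that. A hedged simplification of NOT (the real helper in autotest_common.sh also validates the argument and distinguishes exit codes above 128, per the es checks in the trace):

  NOT() {
      if "$@"; then
          return 1          # the wrapped command unexpectedly succeeded
      fi
      return 0              # failure is the expected outcome
  }
  NOT rpc_cmd spdk_get_version   # true only while no RPC server is listening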
00:19:16.751 [2024-11-20 07:16:40.922554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:19:17.008 [2024-11-20 07:16:41.119504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.265 [2024-11-20 07:16:41.332296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.526 07:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:19:22.526 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:19:22.526 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:19:22.526 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:22.526 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58485 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58485 ']' 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58485 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58485 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58485' 00:19:22.527 killing process with pid 58485 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58485 00:19:22.527 07:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58485 00:19:24.429 ************************************ 00:19:24.429 END TEST skip_rpc 00:19:24.429 ************************************ 00:19:24.429 00:19:24.429 real 0m7.738s 00:19:24.429 user 0m6.999s 00:19:24.429 sys 0m0.640s 00:19:24.429 07:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.429 07:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:19:24.429 07:16:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:19:24.429 07:16:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.429 07:16:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.429 07:16:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:24.429 ************************************ 00:19:24.429 START TEST skip_rpc_with_json 00:19:24.429 ************************************ 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:19:24.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58589 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58589 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58589 ']' 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.429 07:16:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:24.689 [2024-11-20 07:16:48.742235] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
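skip_rpc_with_json, coming up above as pid 58589, performs a configuration round-trip: change live state over RPC, snapshot it with save_config, then restart the target from the snapshot and prove the state came back. The flow traced over the next several lines, condensed (relative paths stand in for the /home/vagrant/spdk_repo prefixes in the log):

  rpc_cmd nvmf_create_transport -t tcp               # live state change
  rpc_cmd save_config > test/rpc/config.json         # snapshot the running target
  killprocess "$spdk_pid"
  # replay the snapshot with no RPC server and check the transport reappears:
  spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt &
  sleep 5
  grep -q 'TCP Transport Init' test/rpc/log.txt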
00:19:24.689 [2024-11-20 07:16:48.743976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58589 ] 00:19:24.949 [2024-11-20 07:16:48.941213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.949 [2024-11-20 07:16:49.062687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:25.895 [2024-11-20 07:16:49.991847] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:19:25.895 request: 00:19:25.895 { 00:19:25.895 "trtype": "tcp", 00:19:25.895 "method": "nvmf_get_transports", 00:19:25.895 "req_id": 1 00:19:25.895 } 00:19:25.895 Got JSON-RPC error response 00:19:25.895 response: 00:19:25.895 { 00:19:25.895 "code": -19, 00:19:25.895 "message": "No such device" 00:19:25.895 } 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.895 07:16:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:25.895 [2024-11-20 07:16:50.004017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.895 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.895 07:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:19:25.895 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.895 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:26.155 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.155 07:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:26.155 { 00:19:26.155 "subsystems": [ 00:19:26.155 { 00:19:26.155 "subsystem": "fsdev", 00:19:26.155 "config": [ 00:19:26.155 { 00:19:26.155 "method": "fsdev_set_opts", 00:19:26.155 "params": { 00:19:26.155 "fsdev_io_pool_size": 65535, 00:19:26.155 "fsdev_io_cache_size": 256 00:19:26.155 } 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "keyring", 00:19:26.155 "config": [] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "iobuf", 00:19:26.155 "config": [ 00:19:26.155 { 00:19:26.155 "method": "iobuf_set_options", 00:19:26.155 "params": { 00:19:26.155 "small_pool_count": 8192, 00:19:26.155 "large_pool_count": 1024, 00:19:26.155 "small_bufsize": 8192, 00:19:26.155 "large_bufsize": 135168, 00:19:26.155 "enable_numa": false 00:19:26.155 } 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "sock", 00:19:26.155 "config": [ 00:19:26.155 { 
00:19:26.155 "method": "sock_set_default_impl", 00:19:26.155 "params": { 00:19:26.155 "impl_name": "posix" 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "sock_impl_set_options", 00:19:26.155 "params": { 00:19:26.155 "impl_name": "ssl", 00:19:26.155 "recv_buf_size": 4096, 00:19:26.155 "send_buf_size": 4096, 00:19:26.155 "enable_recv_pipe": true, 00:19:26.155 "enable_quickack": false, 00:19:26.155 "enable_placement_id": 0, 00:19:26.155 "enable_zerocopy_send_server": true, 00:19:26.155 "enable_zerocopy_send_client": false, 00:19:26.155 "zerocopy_threshold": 0, 00:19:26.155 "tls_version": 0, 00:19:26.155 "enable_ktls": false 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "sock_impl_set_options", 00:19:26.155 "params": { 00:19:26.155 "impl_name": "posix", 00:19:26.155 "recv_buf_size": 2097152, 00:19:26.155 "send_buf_size": 2097152, 00:19:26.155 "enable_recv_pipe": true, 00:19:26.155 "enable_quickack": false, 00:19:26.155 "enable_placement_id": 0, 00:19:26.155 "enable_zerocopy_send_server": true, 00:19:26.155 "enable_zerocopy_send_client": false, 00:19:26.155 "zerocopy_threshold": 0, 00:19:26.155 "tls_version": 0, 00:19:26.155 "enable_ktls": false 00:19:26.155 } 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "vmd", 00:19:26.155 "config": [] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "accel", 00:19:26.155 "config": [ 00:19:26.155 { 00:19:26.155 "method": "accel_set_options", 00:19:26.155 "params": { 00:19:26.155 "small_cache_size": 128, 00:19:26.155 "large_cache_size": 16, 00:19:26.155 "task_count": 2048, 00:19:26.155 "sequence_count": 2048, 00:19:26.155 "buf_count": 2048 00:19:26.155 } 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "bdev", 00:19:26.155 "config": [ 00:19:26.155 { 00:19:26.155 "method": "bdev_set_options", 00:19:26.155 "params": { 00:19:26.155 "bdev_io_pool_size": 65535, 00:19:26.155 "bdev_io_cache_size": 256, 00:19:26.155 "bdev_auto_examine": true, 00:19:26.155 "iobuf_small_cache_size": 128, 00:19:26.155 "iobuf_large_cache_size": 16 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "bdev_raid_set_options", 00:19:26.155 "params": { 00:19:26.155 "process_window_size_kb": 1024, 00:19:26.155 "process_max_bandwidth_mb_sec": 0 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "bdev_iscsi_set_options", 00:19:26.155 "params": { 00:19:26.155 "timeout_sec": 30 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "bdev_nvme_set_options", 00:19:26.155 "params": { 00:19:26.155 "action_on_timeout": "none", 00:19:26.155 "timeout_us": 0, 00:19:26.155 "timeout_admin_us": 0, 00:19:26.155 "keep_alive_timeout_ms": 10000, 00:19:26.155 "arbitration_burst": 0, 00:19:26.155 "low_priority_weight": 0, 00:19:26.155 "medium_priority_weight": 0, 00:19:26.155 "high_priority_weight": 0, 00:19:26.155 "nvme_adminq_poll_period_us": 10000, 00:19:26.155 "nvme_ioq_poll_period_us": 0, 00:19:26.155 "io_queue_requests": 0, 00:19:26.155 "delay_cmd_submit": true, 00:19:26.155 "transport_retry_count": 4, 00:19:26.155 "bdev_retry_count": 3, 00:19:26.155 "transport_ack_timeout": 0, 00:19:26.155 "ctrlr_loss_timeout_sec": 0, 00:19:26.155 "reconnect_delay_sec": 0, 00:19:26.155 "fast_io_fail_timeout_sec": 0, 00:19:26.155 "disable_auto_failback": false, 00:19:26.155 "generate_uuids": false, 00:19:26.155 "transport_tos": 0, 00:19:26.155 "nvme_error_stat": false, 00:19:26.155 "rdma_srq_size": 0, 00:19:26.155 "io_path_stat": false, 
00:19:26.155 "allow_accel_sequence": false, 00:19:26.155 "rdma_max_cq_size": 0, 00:19:26.155 "rdma_cm_event_timeout_ms": 0, 00:19:26.155 "dhchap_digests": [ 00:19:26.155 "sha256", 00:19:26.155 "sha384", 00:19:26.155 "sha512" 00:19:26.155 ], 00:19:26.155 "dhchap_dhgroups": [ 00:19:26.155 "null", 00:19:26.155 "ffdhe2048", 00:19:26.155 "ffdhe3072", 00:19:26.155 "ffdhe4096", 00:19:26.155 "ffdhe6144", 00:19:26.155 "ffdhe8192" 00:19:26.155 ] 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "bdev_nvme_set_hotplug", 00:19:26.155 "params": { 00:19:26.155 "period_us": 100000, 00:19:26.155 "enable": false 00:19:26.155 } 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "method": "bdev_wait_for_examine" 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "scsi", 00:19:26.155 "config": null 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "scheduler", 00:19:26.155 "config": [ 00:19:26.155 { 00:19:26.155 "method": "framework_set_scheduler", 00:19:26.155 "params": { 00:19:26.155 "name": "static" 00:19:26.155 } 00:19:26.155 } 00:19:26.155 ] 00:19:26.155 }, 00:19:26.155 { 00:19:26.155 "subsystem": "vhost_scsi", 00:19:26.155 "config": [] 00:19:26.155 }, 00:19:26.156 { 00:19:26.156 "subsystem": "vhost_blk", 00:19:26.156 "config": [] 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "subsystem": "ublk", 00:19:26.156 "config": [] 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "subsystem": "nbd", 00:19:26.156 "config": [] 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "subsystem": "nvmf", 00:19:26.156 "config": [ 00:19:26.156 { 00:19:26.156 "method": "nvmf_set_config", 00:19:26.156 "params": { 00:19:26.156 "discovery_filter": "match_any", 00:19:26.156 "admin_cmd_passthru": { 00:19:26.156 "identify_ctrlr": false 00:19:26.156 }, 00:19:26.156 "dhchap_digests": [ 00:19:26.156 "sha256", 00:19:26.156 "sha384", 00:19:26.156 "sha512" 00:19:26.156 ], 00:19:26.156 "dhchap_dhgroups": [ 00:19:26.156 "null", 00:19:26.156 "ffdhe2048", 00:19:26.156 "ffdhe3072", 00:19:26.156 "ffdhe4096", 00:19:26.156 "ffdhe6144", 00:19:26.156 "ffdhe8192" 00:19:26.156 ] 00:19:26.156 } 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "method": "nvmf_set_max_subsystems", 00:19:26.156 "params": { 00:19:26.156 "max_subsystems": 1024 00:19:26.156 } 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "method": "nvmf_set_crdt", 00:19:26.156 "params": { 00:19:26.156 "crdt1": 0, 00:19:26.156 "crdt2": 0, 00:19:26.156 "crdt3": 0 00:19:26.156 } 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "method": "nvmf_create_transport", 00:19:26.156 "params": { 00:19:26.156 "trtype": "TCP", 00:19:26.156 "max_queue_depth": 128, 00:19:26.156 "max_io_qpairs_per_ctrlr": 127, 00:19:26.156 "in_capsule_data_size": 4096, 00:19:26.156 "max_io_size": 131072, 00:19:26.156 "io_unit_size": 131072, 00:19:26.156 "max_aq_depth": 128, 00:19:26.156 "num_shared_buffers": 511, 00:19:26.156 "buf_cache_size": 4294967295, 00:19:26.156 "dif_insert_or_strip": false, 00:19:26.156 "zcopy": false, 00:19:26.156 "c2h_success": true, 00:19:26.156 "sock_priority": 0, 00:19:26.156 "abort_timeout_sec": 1, 00:19:26.156 "ack_timeout": 0, 00:19:26.156 "data_wr_pool_size": 0 00:19:26.156 } 00:19:26.156 } 00:19:26.156 ] 00:19:26.156 }, 00:19:26.156 { 00:19:26.156 "subsystem": "iscsi", 00:19:26.156 "config": [ 00:19:26.156 { 00:19:26.156 "method": "iscsi_set_options", 00:19:26.156 "params": { 00:19:26.156 "node_base": "iqn.2016-06.io.spdk", 00:19:26.156 "max_sessions": 128, 00:19:26.156 "max_connections_per_session": 2, 00:19:26.156 "max_queue_depth": 64, 00:19:26.156 
"default_time2wait": 2, 00:19:26.156 "default_time2retain": 20, 00:19:26.156 "first_burst_length": 8192, 00:19:26.156 "immediate_data": true, 00:19:26.156 "allow_duplicated_isid": false, 00:19:26.156 "error_recovery_level": 0, 00:19:26.156 "nop_timeout": 60, 00:19:26.156 "nop_in_interval": 30, 00:19:26.156 "disable_chap": false, 00:19:26.156 "require_chap": false, 00:19:26.156 "mutual_chap": false, 00:19:26.156 "chap_group": 0, 00:19:26.156 "max_large_datain_per_connection": 64, 00:19:26.156 "max_r2t_per_connection": 4, 00:19:26.156 "pdu_pool_size": 36864, 00:19:26.156 "immediate_data_pool_size": 16384, 00:19:26.156 "data_out_pool_size": 2048 00:19:26.156 } 00:19:26.156 } 00:19:26.156 ] 00:19:26.156 } 00:19:26.156 ] 00:19:26.156 } 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58589 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58589 ']' 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58589 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58589 00:19:26.156 killing process with pid 58589 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58589' 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58589 00:19:26.156 07:16:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58589 00:19:28.689 07:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:28.689 07:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58645 00:19:28.689 07:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58645 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58645 ']' 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58645 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58645 00:19:33.978 killing process with pid 58645 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58645' 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58645 00:19:33.978 07:16:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58645 00:19:36.507 07:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:36.507 07:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:19:36.507 ************************************ 00:19:36.507 END TEST skip_rpc_with_json 00:19:36.507 ************************************ 00:19:36.507 00:19:36.507 real 0m12.078s 00:19:36.507 user 0m11.507s 00:19:36.507 sys 0m1.021s 00:19:36.507 07:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.507 07:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 07:17:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:19:36.765 07:17:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:36.765 07:17:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.765 07:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:36.765 ************************************ 00:19:36.765 START TEST skip_rpc_with_delay 00:19:36.765 ************************************ 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:19:36.765 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:19:36.765 [2024-11-20 07:17:00.896260] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
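The error above is the whole point of skip_rpc_with_delay: --wait-for-rpc defers framework init until an RPC arrives, which can never happen under --no-rpc-server, so spdk_tgt must refuse the combination instead of hanging. Reduced to the single assertion being made here:

  NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # the es=1 on the next line confirms a prompt non-zero exit, not a hang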
00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.024 00:19:37.024 real 0m0.238s 00:19:37.024 user 0m0.124s 00:19:37.024 sys 0m0.108s 00:19:37.024 ************************************ 00:19:37.024 END TEST skip_rpc_with_delay 00:19:37.024 ************************************ 00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.024 07:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:19:37.024 07:17:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:19:37.024 07:17:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:19:37.024 07:17:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:19:37.024 07:17:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:37.024 07:17:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.024 07:17:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:37.024 ************************************ 00:19:37.024 START TEST exit_on_failed_rpc_init 00:19:37.024 ************************************ 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58784 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58784 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58784 ']' 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.024 07:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:19:37.024 [2024-11-20 07:17:01.199373] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
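waitforlisten, called above for pid 58784, blocks until the freshly forked target answers on its RPC socket. A hedged sketch; the real helper in autotest_common.sh also takes the rpc_addr and max_retries seen in the trace, and does more than a bare socket test:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local i max_retries=100
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          [ -S "$rpc_addr" ] && return 0           # socket is up, target is ready
          sleep 0.1
      done
      return 1
  }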
00:19:37.024 [2024-11-20 07:17:01.199794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ] 00:19:37.281 [2024-11-20 07:17:01.391719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.540 [2024-11-20 07:17:01.520284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:19:38.512 07:17:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:19:38.770 [2024-11-20 07:17:02.725760] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:19:38.771 [2024-11-20 07:17:02.726156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:19:38.771 [2024-11-20 07:17:02.909224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.029 [2024-11-20 07:17:03.055520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.029 [2024-11-20 07:17:03.055660] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
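That 'in use. Specify another.' error is provoked on purpose: exit_on_failed_rpc_init starts a second spdk_tgt (core mask 0x2) against the default /var/tmp/spdk.sock still held by pid 58784 and expects initialization to fail cleanly rather than disturb the first instance. Running two targets side by side would instead require distinct RPC sockets via -r (socket names here are illustrative):

  spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &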
00:19:39.029 [2024-11-20 07:17:03.055680] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:39.029 [2024-11-20 07:17:03.055709] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58784 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58784 ']' 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58784 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58784 00:19:39.287 killing process with pid 58784 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58784' 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58784 00:19:39.287 07:17:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58784 00:19:42.572 00:19:42.572 real 0m5.117s 00:19:42.572 user 0m5.514s 00:19:42.572 sys 0m0.723s 00:19:42.572 07:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.572 ************************************ 00:19:42.572 END TEST exit_on_failed_rpc_init 00:19:42.572 ************************************ 00:19:42.572 07:17:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:19:42.572 07:17:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:19:42.572 00:19:42.572 real 0m25.665s 00:19:42.572 user 0m24.368s 00:19:42.572 sys 0m2.760s 00:19:42.572 07:17:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.572 ************************************ 00:19:42.572 END TEST skip_rpc 00:19:42.572 ************************************ 00:19:42.572 07:17:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:42.572 07:17:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:19:42.572 07:17:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.572 07:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.572 07:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.572 
************************************ 00:19:42.572 START TEST rpc_client 00:19:42.572 ************************************ 00:19:42.572 07:17:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:19:42.572 * Looking for test storage... 00:19:42.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:19:42.572 07:17:06 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.572 07:17:06 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.572 07:17:06 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.572 07:17:06 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:19:42.572 07:17:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.573 07:17:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.573 07:17:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.573 07:17:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.573 --rc genhtml_branch_coverage=1 00:19:42.573 --rc genhtml_function_coverage=1 00:19:42.573 --rc genhtml_legend=1 00:19:42.573 --rc geninfo_all_blocks=1 00:19:42.573 --rc geninfo_unexecuted_blocks=1 00:19:42.573 00:19:42.573 ' 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.573 --rc genhtml_branch_coverage=1 00:19:42.573 --rc genhtml_function_coverage=1 00:19:42.573 --rc genhtml_legend=1 00:19:42.573 --rc geninfo_all_blocks=1 00:19:42.573 --rc geninfo_unexecuted_blocks=1 00:19:42.573 00:19:42.573 ' 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.573 --rc genhtml_branch_coverage=1 00:19:42.573 --rc genhtml_function_coverage=1 00:19:42.573 --rc genhtml_legend=1 00:19:42.573 --rc geninfo_all_blocks=1 00:19:42.573 --rc geninfo_unexecuted_blocks=1 00:19:42.573 00:19:42.573 ' 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.573 --rc genhtml_branch_coverage=1 00:19:42.573 --rc genhtml_function_coverage=1 00:19:42.573 --rc genhtml_legend=1 00:19:42.573 --rc geninfo_all_blocks=1 00:19:42.573 --rc geninfo_unexecuted_blocks=1 00:19:42.573 00:19:42.573 ' 00:19:42.573 07:17:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:19:42.573 OK 00:19:42.573 07:17:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:19:42.573 00:19:42.573 real 0m0.295s 00:19:42.573 user 0m0.165s 00:19:42.573 sys 0m0.139s 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.573 07:17:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:19:42.573 ************************************ 00:19:42.573 END TEST rpc_client 00:19:42.573 ************************************ 00:19:42.573 07:17:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:19:42.573 07:17:06 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.573 07:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.573 07:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.573 ************************************ 00:19:42.573 START TEST json_config 00:19:42.573 ************************************ 00:19:42.573 07:17:06 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:19:42.573 07:17:06 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.573 07:17:06 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:19:42.573 07:17:06 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.833 07:17:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.833 07:17:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.833 07:17:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.833 07:17:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.833 07:17:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.833 07:17:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:19:42.833 07:17:06 json_config -- scripts/common.sh@345 -- # : 1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.833 07:17:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.833 07:17:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@353 -- # local d=1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.833 07:17:06 json_config -- scripts/common.sh@355 -- # echo 1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.833 07:17:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@353 -- # local d=2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.833 07:17:06 json_config -- scripts/common.sh@355 -- # echo 2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.833 07:17:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.833 07:17:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.833 07:17:06 json_config -- scripts/common.sh@368 -- # return 0 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:42.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.833 --rc genhtml_branch_coverage=1 00:19:42.833 --rc genhtml_function_coverage=1 00:19:42.833 --rc genhtml_legend=1 00:19:42.833 --rc geninfo_all_blocks=1 00:19:42.833 --rc geninfo_unexecuted_blocks=1 00:19:42.833 00:19:42.833 ' 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:42.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.833 --rc genhtml_branch_coverage=1 00:19:42.833 --rc genhtml_function_coverage=1 00:19:42.833 --rc genhtml_legend=1 00:19:42.833 --rc geninfo_all_blocks=1 00:19:42.833 --rc geninfo_unexecuted_blocks=1 00:19:42.833 00:19:42.833 ' 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:42.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.833 --rc genhtml_branch_coverage=1 00:19:42.833 --rc genhtml_function_coverage=1 00:19:42.833 --rc genhtml_legend=1 00:19:42.833 --rc geninfo_all_blocks=1 00:19:42.833 --rc geninfo_unexecuted_blocks=1 00:19:42.833 00:19:42.833 ' 00:19:42.833 07:17:06 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:42.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.833 --rc genhtml_branch_coverage=1 00:19:42.833 --rc genhtml_function_coverage=1 00:19:42.833 --rc genhtml_legend=1 00:19:42.833 --rc geninfo_all_blocks=1 00:19:42.833 --rc geninfo_unexecuted_blocks=1 00:19:42.833 00:19:42.833 ' 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.833 
07:17:06 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:300399fd-40ba-4a3f-8d5e-751087a81d1d 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=300399fd-40ba-4a3f-8d5e-751087a81d1d 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.833 07:17:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.833 07:17:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.833 07:17:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.833 07:17:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.833 07:17:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.833 07:17:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.833 07:17:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.833 07:17:06 json_config -- paths/export.sh@5 -- # export PATH 00:19:42.833 07:17:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:42.833 07:17:06 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:42.833 07:17:06 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:42.833 07:17:06 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@50 -- # : 0 00:19:42.833 
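The 'integer expression expected' failure reported just below comes from the trace above it: nvmf/common.sh line 31 runs '[' '' -eq 1 ']', applying the numeric -eq to a variable that expanded to nothing. A hedged guard (the variable name is hypothetical; the real one is not visible in this log):

  # fragile: [ "$some_flag" -eq 1 ]     breaks whenever $some_flag is empty/unset
  [ "${some_flag:-0}" -eq 1 ]           # default to 0 before comparing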
07:17:06 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:42.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:42.833 07:17:06 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:19:42.833 07:17:06 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:19:42.833 WARNING: No tests are enabled so not running JSON configuration tests 00:19:42.834 07:17:06 json_config -- json_config/json_config.sh@28 -- # exit 0 00:19:42.834 00:19:42.834 real 0m0.205s 00:19:42.834 user 0m0.123s 00:19:42.834 sys 0m0.086s 00:19:42.834 07:17:06 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.834 07:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 ************************************ 00:19:42.834 END TEST json_config 00:19:42.834 ************************************ 00:19:42.834 07:17:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:19:42.834 07:17:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:42.834 07:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.834 07:17:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.834 ************************************ 00:19:42.834 START TEST json_config_extra_key 00:19:42.834 ************************************ 00:19:42.834 07:17:06 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:19:42.834 07:17:06 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:42.834 07:17:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:42.834 07:17:06 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:19:43.093 07:17:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:19:43.093 
07:17:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:43.093 07:17:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:19:43.093 07:17:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:43.093 07:17:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:43.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.093 --rc genhtml_branch_coverage=1 00:19:43.093 --rc genhtml_function_coverage=1 00:19:43.093 --rc genhtml_legend=1 00:19:43.093 --rc geninfo_all_blocks=1 00:19:43.093 --rc geninfo_unexecuted_blocks=1 00:19:43.093 00:19:43.093 ' 00:19:43.093 07:17:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.094 --rc genhtml_branch_coverage=1 00:19:43.094 --rc genhtml_function_coverage=1 00:19:43.094 --rc genhtml_legend=1 00:19:43.094 --rc geninfo_all_blocks=1 00:19:43.094 --rc geninfo_unexecuted_blocks=1 00:19:43.094 00:19:43.094 ' 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.094 --rc genhtml_branch_coverage=1 00:19:43.094 --rc genhtml_function_coverage=1 00:19:43.094 --rc genhtml_legend=1 00:19:43.094 --rc geninfo_all_blocks=1 00:19:43.094 --rc geninfo_unexecuted_blocks=1 00:19:43.094 00:19:43.094 ' 00:19:43.094 07:17:07 json_config_extra_key -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:43.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:43.094 --rc genhtml_branch_coverage=1 00:19:43.094 --rc genhtml_function_coverage=1 00:19:43.094 --rc genhtml_legend=1 00:19:43.094 --rc geninfo_all_blocks=1 00:19:43.094 --rc geninfo_unexecuted_blocks=1 00:19:43.094 00:19:43.094 ' 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:300399fd-40ba-4a3f-8d5e-751087a81d1d 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=300399fd-40ba-4a3f-8d5e-751087a81d1d 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.094 07:17:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:19:43.094 07:17:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.094 07:17:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.094 07:17:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.094 07:17:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.094 07:17:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.094 07:17:07 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.094 07:17:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:19:43.094 07:17:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:43.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:43.094 07:17:07 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:19:43.094 INFO: launching applications... 
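Note on the "[: : integer expression expected" message repeated above: nvmf/common.sh line 31 ends up running [ '' -eq 1 ] because the variable it tests arrives empty in this run, and [ requires both -eq operands to be integers. The check is non-fatal (the trace continues past it), but the defensive form expands a default first. A minimal sketch, with FLAG as a stand-in name rather than the script's actual variable:

    # FLAG is hypothetical; line 31 tests some flag that is empty in this run.
    # "${FLAG:-0}" substitutes 0 when FLAG is unset or empty, so [ always
    # compares two integers and never prints "integer expression expected".
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi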
00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:19:43.094 07:17:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59023 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:19:43.094 Waiting for target to run... 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59023 /var/tmp/spdk_tgt.sock 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59023 ']' 00:19:43.094 07:17:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:19:43.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.094 07:17:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:19:43.094 [2024-11-20 07:17:07.292706] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
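The waitforlisten call above blocks until the spdk_tgt just launched with --json extra_key.json is alive and answering on /var/tmp/spdk_tgt.sock (the trace shows max_retries=100). A condensed sketch of that polling pattern; the real helper in common/autotest_common.sh probes via the RPC client, so testing for the socket file with -S here is a simplification:

    wait_for_socket() { # usage: wait_for_socket <pid> <unix-socket-path>
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1 # target died during startup
            [[ -S $sock ]] && return 0              # socket created: target is up
            sleep 0.1
        done
        return 1 # timed out
    }

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    wait_for_socket $! /var/tmp/spdk_tgt.sock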
00:19:43.094 [2024-11-20 07:17:07.293212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59023 ] 00:19:44.031 [2024-11-20 07:17:07.904116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.031 [2024-11-20 07:17:08.064660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.041 00:19:45.041 INFO: shutting down applications... 00:19:45.041 07:17:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.041 07:17:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:19:45.041 07:17:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:19:45.041 07:17:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59023 ]] 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59023 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:45.041 07:17:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:45.354 07:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:45.354 07:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:45.354 07:17:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:45.354 07:17:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:45.921 07:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:45.921 07:17:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:45.921 07:17:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:45.921 07:17:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:46.487 07:17:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:46.487 07:17:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:46.487 07:17:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:46.487 07:17:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:46.746 07:17:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:46.746 07:17:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:46.746 07:17:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:46.746 07:17:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:47.315 07:17:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:47.315 07:17:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:47.315 07:17:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 
00:19:47.315 07:17:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:47.881 07:17:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:47.881 07:17:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:47.881 07:17:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:47.881 07:17:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59023 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:19:48.448 SPDK target shutdown done 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:19:48.448 07:17:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:19:48.448 Success 00:19:48.448 07:17:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:19:48.448 00:19:48.448 real 0m5.534s 00:19:48.448 user 0m4.683s 00:19:48.448 sys 0m0.890s 00:19:48.448 07:17:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.448 07:17:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:19:48.448 ************************************ 00:19:48.448 END TEST json_config_extra_key 00:19:48.448 ************************************ 00:19:48.448 07:17:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:48.448 07:17:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.448 07:17:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.448 07:17:12 -- common/autotest_common.sh@10 -- # set +x 00:19:48.448 ************************************ 00:19:48.448 START TEST alias_rpc 00:19:48.448 ************************************ 00:19:48.448 07:17:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:19:48.448 * Looking for test storage... 
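The kill -0 / sleep 0.5 iterations traced above are json_config/common.sh waiting, up to 30 half-second tries, for the target to exit after SIGINT; kill -0 sends no signal and only probes whether the pid is still alive. The same loop in isolation, with a generic pid variable:

    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2> /dev/null; then # pid gone: shutdown finished
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done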
00:19:48.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:19:48.448 07:17:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.448 07:17:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.448 07:17:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.705 07:17:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.705 07:17:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.705 07:17:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.705 07:17:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.705 07:17:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.705 07:17:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.706 07:17:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.706 --rc genhtml_branch_coverage=1 00:19:48.706 --rc genhtml_function_coverage=1 00:19:48.706 --rc genhtml_legend=1 00:19:48.706 --rc geninfo_all_blocks=1 00:19:48.706 --rc geninfo_unexecuted_blocks=1 00:19:48.706 00:19:48.706 ' 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.706 --rc genhtml_branch_coverage=1 00:19:48.706 --rc genhtml_function_coverage=1 00:19:48.706 --rc genhtml_legend=1 00:19:48.706 --rc geninfo_all_blocks=1 00:19:48.706 --rc geninfo_unexecuted_blocks=1 00:19:48.706 00:19:48.706 ' 00:19:48.706 07:17:12 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.706 --rc genhtml_branch_coverage=1 00:19:48.706 --rc genhtml_function_coverage=1 00:19:48.706 --rc genhtml_legend=1 00:19:48.706 --rc geninfo_all_blocks=1 00:19:48.706 --rc geninfo_unexecuted_blocks=1 00:19:48.706 00:19:48.706 ' 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.706 --rc genhtml_branch_coverage=1 00:19:48.706 --rc genhtml_function_coverage=1 00:19:48.706 --rc genhtml_legend=1 00:19:48.706 --rc geninfo_all_blocks=1 00:19:48.706 --rc geninfo_unexecuted_blocks=1 00:19:48.706 00:19:48.706 ' 00:19:48.706 07:17:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:48.706 07:17:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59147 00:19:48.706 07:17:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59147 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59147 ']' 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.706 07:17:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:48.706 07:17:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:48.706 [2024-11-20 07:17:12.860008] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
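The lt 1.15 2 / cmp_versions trace that precedes each test above decides whether the installed lcov predates 2.x before exporting LCOV_OPTS: versions are split on ".-:" and compared field by field. Its algorithm condensed into one standalone function, a simplification of scripts/common.sh that assumes purely numeric fields and treats missing fields as 0:

    version_lt() { # version_lt A B: succeed when version A sorts before B
        local -a ver1 ver2
        local v n
        IFS=.-: read -ra ver1 <<< "$1" # split on dots, dashes, colons
        IFS=.-: read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1 # versions are equal
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x' # succeeds, as in the trace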
00:19:48.706 [2024-11-20 07:17:12.860610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59147 ] 00:19:48.963 [2024-11-20 07:17:13.041160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.222 [2024-11-20 07:17:13.165780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.155 07:17:14 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:50.156 07:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:19:50.156 07:17:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59147 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59147 ']' 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59147 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.156 07:17:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59147 00:19:50.415 killing process with pid 59147 00:19:50.415 07:17:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.415 07:17:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.415 07:17:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59147' 00:19:50.415 07:17:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 59147 00:19:50.415 07:17:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 59147 00:19:53.697 ************************************ 00:19:53.697 END TEST alias_rpc 00:19:53.697 ************************************ 00:19:53.697 00:19:53.697 real 0m4.720s 00:19:53.697 user 0m4.747s 00:19:53.697 sys 0m0.623s 00:19:53.697 07:17:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.697 07:17:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:53.697 07:17:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:19:53.697 07:17:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:53.697 07:17:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:53.697 07:17:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.697 07:17:17 -- common/autotest_common.sh@10 -- # set +x 00:19:53.697 ************************************ 00:19:53.697 START TEST spdkcli_tcp 00:19:53.697 ************************************ 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:19:53.697 * Looking for test storage... 
00:19:53.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.697 07:17:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.697 --rc genhtml_branch_coverage=1 00:19:53.697 --rc genhtml_function_coverage=1 00:19:53.697 --rc genhtml_legend=1 00:19:53.697 --rc geninfo_all_blocks=1 00:19:53.697 --rc geninfo_unexecuted_blocks=1 00:19:53.697 00:19:53.697 ' 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.697 --rc genhtml_branch_coverage=1 00:19:53.697 --rc genhtml_function_coverage=1 00:19:53.697 --rc genhtml_legend=1 00:19:53.697 --rc geninfo_all_blocks=1 00:19:53.697 --rc geninfo_unexecuted_blocks=1 00:19:53.697 
00:19:53.697 ' 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.697 --rc genhtml_branch_coverage=1 00:19:53.697 --rc genhtml_function_coverage=1 00:19:53.697 --rc genhtml_legend=1 00:19:53.697 --rc geninfo_all_blocks=1 00:19:53.697 --rc geninfo_unexecuted_blocks=1 00:19:53.697 00:19:53.697 ' 00:19:53.697 07:17:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.697 --rc genhtml_branch_coverage=1 00:19:53.697 --rc genhtml_function_coverage=1 00:19:53.697 --rc genhtml_legend=1 00:19:53.697 --rc geninfo_all_blocks=1 00:19:53.697 --rc geninfo_unexecuted_blocks=1 00:19:53.698 00:19:53.698 ' 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59261 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59261 00:19:53.698 07:17:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59261 ']' 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.698 07:17:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.698 [2024-11-20 07:17:17.653162] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
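tcp.sh@21 above installs trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT (alias_rpc.sh did the same on ERR) so the spdk_tgt a test spawns gets killed even when the test dies midway. The shape of that pattern with a generic cleanup body; err_cleanup's actual contents are not shown in the trace:

    build/bin/spdk_tgt -m 0x3 -p 0 &
    spdk_tgt_pid=$!

    cleanup() { kill "$spdk_tgt_pid" 2> /dev/null || true; }
    trap 'cleanup; exit 1' SIGINT SIGTERM # on a signal: kill target, fail test
    trap cleanup EXIT                     # on normal exit: still reap the target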
00:19:53.698 [2024-11-20 07:17:17.653561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59261 ] 00:19:53.698 [2024-11-20 07:17:17.863040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:53.957 [2024-11-20 07:17:18.040144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.957 [2024-11-20 07:17:18.040181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.893 07:17:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.893 07:17:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:19:54.893 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:19:54.893 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59284 00:19:54.893 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:19:55.152 [ 00:19:55.152 "bdev_malloc_delete", 00:19:55.152 "bdev_malloc_create", 00:19:55.152 "bdev_null_resize", 00:19:55.152 "bdev_null_delete", 00:19:55.152 "bdev_null_create", 00:19:55.152 "bdev_nvme_cuse_unregister", 00:19:55.152 "bdev_nvme_cuse_register", 00:19:55.152 "bdev_opal_new_user", 00:19:55.152 "bdev_opal_set_lock_state", 00:19:55.152 "bdev_opal_delete", 00:19:55.152 "bdev_opal_get_info", 00:19:55.152 "bdev_opal_create", 00:19:55.152 "bdev_nvme_opal_revert", 00:19:55.152 "bdev_nvme_opal_init", 00:19:55.152 "bdev_nvme_send_cmd", 00:19:55.152 "bdev_nvme_set_keys", 00:19:55.152 "bdev_nvme_get_path_iostat", 00:19:55.152 "bdev_nvme_get_mdns_discovery_info", 00:19:55.152 "bdev_nvme_stop_mdns_discovery", 00:19:55.152 "bdev_nvme_start_mdns_discovery", 00:19:55.152 "bdev_nvme_set_multipath_policy", 00:19:55.152 "bdev_nvme_set_preferred_path", 00:19:55.152 "bdev_nvme_get_io_paths", 00:19:55.152 "bdev_nvme_remove_error_injection", 00:19:55.152 "bdev_nvme_add_error_injection", 00:19:55.152 "bdev_nvme_get_discovery_info", 00:19:55.152 "bdev_nvme_stop_discovery", 00:19:55.152 "bdev_nvme_start_discovery", 00:19:55.152 "bdev_nvme_get_controller_health_info", 00:19:55.152 "bdev_nvme_disable_controller", 00:19:55.152 "bdev_nvme_enable_controller", 00:19:55.152 "bdev_nvme_reset_controller", 00:19:55.152 "bdev_nvme_get_transport_statistics", 00:19:55.152 "bdev_nvme_apply_firmware", 00:19:55.152 "bdev_nvme_detach_controller", 00:19:55.152 "bdev_nvme_get_controllers", 00:19:55.152 "bdev_nvme_attach_controller", 00:19:55.152 "bdev_nvme_set_hotplug", 00:19:55.152 "bdev_nvme_set_options", 00:19:55.152 "bdev_passthru_delete", 00:19:55.152 "bdev_passthru_create", 00:19:55.152 "bdev_lvol_set_parent_bdev", 00:19:55.152 "bdev_lvol_set_parent", 00:19:55.152 "bdev_lvol_check_shallow_copy", 00:19:55.152 "bdev_lvol_start_shallow_copy", 00:19:55.152 "bdev_lvol_grow_lvstore", 00:19:55.152 "bdev_lvol_get_lvols", 00:19:55.152 "bdev_lvol_get_lvstores", 00:19:55.152 "bdev_lvol_delete", 00:19:55.152 "bdev_lvol_set_read_only", 00:19:55.152 "bdev_lvol_resize", 00:19:55.152 "bdev_lvol_decouple_parent", 00:19:55.152 "bdev_lvol_inflate", 00:19:55.152 "bdev_lvol_rename", 00:19:55.152 "bdev_lvol_clone_bdev", 00:19:55.152 "bdev_lvol_clone", 00:19:55.152 "bdev_lvol_snapshot", 00:19:55.152 "bdev_lvol_create", 00:19:55.152 "bdev_lvol_delete_lvstore", 00:19:55.152 "bdev_lvol_rename_lvstore", 00:19:55.152 
"bdev_lvol_create_lvstore", 00:19:55.152 "bdev_raid_set_options", 00:19:55.152 "bdev_raid_remove_base_bdev", 00:19:55.152 "bdev_raid_add_base_bdev", 00:19:55.152 "bdev_raid_delete", 00:19:55.152 "bdev_raid_create", 00:19:55.152 "bdev_raid_get_bdevs", 00:19:55.152 "bdev_error_inject_error", 00:19:55.152 "bdev_error_delete", 00:19:55.152 "bdev_error_create", 00:19:55.152 "bdev_split_delete", 00:19:55.152 "bdev_split_create", 00:19:55.152 "bdev_delay_delete", 00:19:55.152 "bdev_delay_create", 00:19:55.152 "bdev_delay_update_latency", 00:19:55.152 "bdev_zone_block_delete", 00:19:55.152 "bdev_zone_block_create", 00:19:55.152 "blobfs_create", 00:19:55.152 "blobfs_detect", 00:19:55.152 "blobfs_set_cache_size", 00:19:55.152 "bdev_xnvme_delete", 00:19:55.152 "bdev_xnvme_create", 00:19:55.152 "bdev_aio_delete", 00:19:55.152 "bdev_aio_rescan", 00:19:55.152 "bdev_aio_create", 00:19:55.152 "bdev_ftl_set_property", 00:19:55.152 "bdev_ftl_get_properties", 00:19:55.152 "bdev_ftl_get_stats", 00:19:55.152 "bdev_ftl_unmap", 00:19:55.152 "bdev_ftl_unload", 00:19:55.152 "bdev_ftl_delete", 00:19:55.152 "bdev_ftl_load", 00:19:55.152 "bdev_ftl_create", 00:19:55.152 "bdev_virtio_attach_controller", 00:19:55.152 "bdev_virtio_scsi_get_devices", 00:19:55.152 "bdev_virtio_detach_controller", 00:19:55.152 "bdev_virtio_blk_set_hotplug", 00:19:55.152 "bdev_iscsi_delete", 00:19:55.152 "bdev_iscsi_create", 00:19:55.152 "bdev_iscsi_set_options", 00:19:55.152 "accel_error_inject_error", 00:19:55.152 "ioat_scan_accel_module", 00:19:55.152 "dsa_scan_accel_module", 00:19:55.152 "iaa_scan_accel_module", 00:19:55.152 "keyring_file_remove_key", 00:19:55.152 "keyring_file_add_key", 00:19:55.152 "keyring_linux_set_options", 00:19:55.152 "fsdev_aio_delete", 00:19:55.152 "fsdev_aio_create", 00:19:55.152 "iscsi_get_histogram", 00:19:55.152 "iscsi_enable_histogram", 00:19:55.152 "iscsi_set_options", 00:19:55.152 "iscsi_get_auth_groups", 00:19:55.152 "iscsi_auth_group_remove_secret", 00:19:55.152 "iscsi_auth_group_add_secret", 00:19:55.152 "iscsi_delete_auth_group", 00:19:55.152 "iscsi_create_auth_group", 00:19:55.152 "iscsi_set_discovery_auth", 00:19:55.152 "iscsi_get_options", 00:19:55.152 "iscsi_target_node_request_logout", 00:19:55.152 "iscsi_target_node_set_redirect", 00:19:55.152 "iscsi_target_node_set_auth", 00:19:55.152 "iscsi_target_node_add_lun", 00:19:55.152 "iscsi_get_stats", 00:19:55.152 "iscsi_get_connections", 00:19:55.152 "iscsi_portal_group_set_auth", 00:19:55.152 "iscsi_start_portal_group", 00:19:55.152 "iscsi_delete_portal_group", 00:19:55.152 "iscsi_create_portal_group", 00:19:55.152 "iscsi_get_portal_groups", 00:19:55.152 "iscsi_delete_target_node", 00:19:55.152 "iscsi_target_node_remove_pg_ig_maps", 00:19:55.152 "iscsi_target_node_add_pg_ig_maps", 00:19:55.152 "iscsi_create_target_node", 00:19:55.152 "iscsi_get_target_nodes", 00:19:55.152 "iscsi_delete_initiator_group", 00:19:55.152 "iscsi_initiator_group_remove_initiators", 00:19:55.152 "iscsi_initiator_group_add_initiators", 00:19:55.152 "iscsi_create_initiator_group", 00:19:55.152 "iscsi_get_initiator_groups", 00:19:55.152 "nvmf_set_crdt", 00:19:55.152 "nvmf_set_config", 00:19:55.152 "nvmf_set_max_subsystems", 00:19:55.152 "nvmf_stop_mdns_prr", 00:19:55.152 "nvmf_publish_mdns_prr", 00:19:55.152 "nvmf_subsystem_get_listeners", 00:19:55.152 "nvmf_subsystem_get_qpairs", 00:19:55.152 "nvmf_subsystem_get_controllers", 00:19:55.152 "nvmf_get_stats", 00:19:55.152 "nvmf_get_transports", 00:19:55.152 "nvmf_create_transport", 00:19:55.152 "nvmf_get_targets", 00:19:55.152 
"nvmf_delete_target", 00:19:55.152 "nvmf_create_target", 00:19:55.152 "nvmf_subsystem_allow_any_host", 00:19:55.152 "nvmf_subsystem_set_keys", 00:19:55.152 "nvmf_subsystem_remove_host", 00:19:55.152 "nvmf_subsystem_add_host", 00:19:55.152 "nvmf_ns_remove_host", 00:19:55.152 "nvmf_ns_add_host", 00:19:55.152 "nvmf_subsystem_remove_ns", 00:19:55.152 "nvmf_subsystem_set_ns_ana_group", 00:19:55.153 "nvmf_subsystem_add_ns", 00:19:55.153 "nvmf_subsystem_listener_set_ana_state", 00:19:55.153 "nvmf_discovery_get_referrals", 00:19:55.153 "nvmf_discovery_remove_referral", 00:19:55.153 "nvmf_discovery_add_referral", 00:19:55.153 "nvmf_subsystem_remove_listener", 00:19:55.153 "nvmf_subsystem_add_listener", 00:19:55.153 "nvmf_delete_subsystem", 00:19:55.153 "nvmf_create_subsystem", 00:19:55.153 "nvmf_get_subsystems", 00:19:55.153 "env_dpdk_get_mem_stats", 00:19:55.153 "nbd_get_disks", 00:19:55.153 "nbd_stop_disk", 00:19:55.153 "nbd_start_disk", 00:19:55.153 "ublk_recover_disk", 00:19:55.153 "ublk_get_disks", 00:19:55.153 "ublk_stop_disk", 00:19:55.153 "ublk_start_disk", 00:19:55.153 "ublk_destroy_target", 00:19:55.153 "ublk_create_target", 00:19:55.153 "virtio_blk_create_transport", 00:19:55.153 "virtio_blk_get_transports", 00:19:55.153 "vhost_controller_set_coalescing", 00:19:55.153 "vhost_get_controllers", 00:19:55.153 "vhost_delete_controller", 00:19:55.153 "vhost_create_blk_controller", 00:19:55.153 "vhost_scsi_controller_remove_target", 00:19:55.153 "vhost_scsi_controller_add_target", 00:19:55.153 "vhost_start_scsi_controller", 00:19:55.153 "vhost_create_scsi_controller", 00:19:55.153 "thread_set_cpumask", 00:19:55.153 "scheduler_set_options", 00:19:55.153 "framework_get_governor", 00:19:55.153 "framework_get_scheduler", 00:19:55.153 "framework_set_scheduler", 00:19:55.153 "framework_get_reactors", 00:19:55.153 "thread_get_io_channels", 00:19:55.153 "thread_get_pollers", 00:19:55.153 "thread_get_stats", 00:19:55.153 "framework_monitor_context_switch", 00:19:55.153 "spdk_kill_instance", 00:19:55.153 "log_enable_timestamps", 00:19:55.153 "log_get_flags", 00:19:55.153 "log_clear_flag", 00:19:55.153 "log_set_flag", 00:19:55.153 "log_get_level", 00:19:55.153 "log_set_level", 00:19:55.153 "log_get_print_level", 00:19:55.153 "log_set_print_level", 00:19:55.153 "framework_enable_cpumask_locks", 00:19:55.153 "framework_disable_cpumask_locks", 00:19:55.153 "framework_wait_init", 00:19:55.153 "framework_start_init", 00:19:55.153 "scsi_get_devices", 00:19:55.153 "bdev_get_histogram", 00:19:55.153 "bdev_enable_histogram", 00:19:55.153 "bdev_set_qos_limit", 00:19:55.153 "bdev_set_qd_sampling_period", 00:19:55.153 "bdev_get_bdevs", 00:19:55.153 "bdev_reset_iostat", 00:19:55.153 "bdev_get_iostat", 00:19:55.153 "bdev_examine", 00:19:55.153 "bdev_wait_for_examine", 00:19:55.153 "bdev_set_options", 00:19:55.153 "accel_get_stats", 00:19:55.153 "accel_set_options", 00:19:55.153 "accel_set_driver", 00:19:55.153 "accel_crypto_key_destroy", 00:19:55.153 "accel_crypto_keys_get", 00:19:55.153 "accel_crypto_key_create", 00:19:55.153 "accel_assign_opc", 00:19:55.153 "accel_get_module_info", 00:19:55.153 "accel_get_opc_assignments", 00:19:55.153 "vmd_rescan", 00:19:55.153 "vmd_remove_device", 00:19:55.153 "vmd_enable", 00:19:55.153 "sock_get_default_impl", 00:19:55.153 "sock_set_default_impl", 00:19:55.153 "sock_impl_set_options", 00:19:55.153 "sock_impl_get_options", 00:19:55.153 "iobuf_get_stats", 00:19:55.153 "iobuf_set_options", 00:19:55.153 "keyring_get_keys", 00:19:55.153 "framework_get_pci_devices", 00:19:55.153 
"framework_get_config", 00:19:55.153 "framework_get_subsystems", 00:19:55.153 "fsdev_set_opts", 00:19:55.153 "fsdev_get_opts", 00:19:55.153 "trace_get_info", 00:19:55.153 "trace_get_tpoint_group_mask", 00:19:55.153 "trace_disable_tpoint_group", 00:19:55.153 "trace_enable_tpoint_group", 00:19:55.153 "trace_clear_tpoint_mask", 00:19:55.153 "trace_set_tpoint_mask", 00:19:55.153 "notify_get_notifications", 00:19:55.153 "notify_get_types", 00:19:55.153 "spdk_get_version", 00:19:55.153 "rpc_get_methods" 00:19:55.153 ] 00:19:55.153 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.153 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:55.153 07:17:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59261 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59261 ']' 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59261 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59261 00:19:55.153 killing process with pid 59261 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59261' 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59261 00:19:55.153 07:17:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59261 00:19:58.438 ************************************ 00:19:58.438 END TEST spdkcli_tcp 00:19:58.438 ************************************ 00:19:58.438 00:19:58.438 real 0m4.648s 00:19:58.438 user 0m8.271s 00:19:58.438 sys 0m0.706s 00:19:58.438 07:17:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.438 07:17:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:58.438 07:17:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:58.438 07:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.438 07:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.438 07:17:21 -- common/autotest_common.sh@10 -- # set +x 00:19:58.438 ************************************ 00:19:58.438 START TEST dpdk_mem_utility 00:19:58.438 ************************************ 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:19:58.438 * Looking for test storage... 
00:19:58.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:19:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.438 07:17:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.438 --rc genhtml_branch_coverage=1 00:19:58.438 --rc genhtml_function_coverage=1 00:19:58.438 --rc genhtml_legend=1 00:19:58.438 --rc geninfo_all_blocks=1 00:19:58.438 --rc geninfo_unexecuted_blocks=1 00:19:58.438 00:19:58.438 ' 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.438 --rc genhtml_branch_coverage=1 00:19:58.438 --rc genhtml_function_coverage=1 00:19:58.438 --rc genhtml_legend=1 00:19:58.438 --rc geninfo_all_blocks=1 00:19:58.438 --rc geninfo_unexecuted_blocks=1 00:19:58.438 00:19:58.438 ' 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.438 --rc genhtml_branch_coverage=1 00:19:58.438 --rc genhtml_function_coverage=1 00:19:58.438 --rc genhtml_legend=1 00:19:58.438 --rc geninfo_all_blocks=1 00:19:58.438 --rc geninfo_unexecuted_blocks=1 00:19:58.438 00:19:58.438 ' 00:19:58.438 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:58.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.439 --rc genhtml_branch_coverage=1 00:19:58.439 --rc genhtml_function_coverage=1 00:19:58.439 --rc genhtml_legend=1 00:19:58.439 --rc geninfo_all_blocks=1 00:19:58.439 --rc geninfo_unexecuted_blocks=1 00:19:58.439 00:19:58.439 ' 00:19:58.439 07:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:58.439 07:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59389 00:19:58.439 07:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59389 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59389 ']' 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.439 07:17:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.439 07:17:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:58.439 [2024-11-20 07:17:22.361678] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:19:58.439 [2024-11-20 07:17:22.362114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:19:58.439 [2024-11-20 07:17:22.567812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.697 [2024-11-20 07:17:22.707136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.634 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.634 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:19:59.634 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:19:59.634 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:19:59.634 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.634 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:19:59.634 { 00:19:59.634 "filename": "/tmp/spdk_mem_dump.txt" 00:19:59.634 } 00:19:59.634 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.634 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:19:59.634 DPDK memory size 816.000000 MiB in 1 heap(s) 00:19:59.634 1 heaps totaling size 816.000000 MiB 00:19:59.634 size: 816.000000 MiB heap id: 0 00:19:59.634 end heaps---------- 00:19:59.634 9 mempools totaling size 595.772034 MiB 00:19:59.634 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:19:59.634 size: 158.602051 MiB name: PDU_data_out_Pool 00:19:59.634 size: 92.545471 MiB name: bdev_io_59389 00:19:59.634 size: 50.003479 MiB name: msgpool_59389 00:19:59.634 size: 36.509338 MiB name: fsdev_io_59389 00:19:59.634 size: 21.763794 MiB name: PDU_Pool 00:19:59.634 size: 19.513306 MiB name: SCSI_TASK_Pool 00:19:59.634 size: 4.133484 MiB name: evtpool_59389 00:19:59.634 size: 0.026123 MiB name: Session_Pool 00:19:59.634 end mempools------- 00:19:59.634 6 memzones totaling size 4.142822 MiB 00:19:59.634 size: 1.000366 MiB name: RG_ring_0_59389 00:19:59.634 size: 1.000366 MiB name: RG_ring_1_59389 00:19:59.634 size: 1.000366 MiB name: RG_ring_4_59389 00:19:59.634 size: 1.000366 MiB name: RG_ring_5_59389 00:19:59.634 size: 0.125366 MiB name: RG_ring_2_59389 00:19:59.634 size: 0.015991 MiB name: RG_ring_3_59389 00:19:59.634 end memzones------- 00:19:59.634 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:19:59.896 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:19:59.896 list of free elements. 
size: 16.792847 MiB
00:19:59.896 element at address: 0x200006400000 with size: 1.995972 MiB
00:19:59.896 element at address: 0x20000a600000 with size: 1.995972 MiB
00:19:59.896 element at address: 0x200003e00000 with size: 1.991028 MiB
00:19:59.896 element at address: 0x200018d00040 with size: 0.999939 MiB
00:19:59.896 element at address: 0x200019100040 with size: 0.999939 MiB
00:19:59.896 element at address: 0x200019200000 with size: 0.999084 MiB
00:19:59.896 element at address: 0x200031e00000 with size: 0.994324 MiB
00:19:59.896 element at address: 0x200000400000 with size: 0.992004 MiB
00:19:59.896 element at address: 0x200018a00000 with size: 0.959656 MiB
00:19:59.896 element at address: 0x200019500040 with size: 0.936401 MiB
00:19:59.896 element at address: 0x200000200000 with size: 0.716980 MiB
00:19:59.896 element at address: 0x20001ac00000 with size: 0.563416 MiB
00:19:59.896 element at address: 0x200000c00000 with size: 0.490173 MiB
00:19:59.896 element at address: 0x200018e00000 with size: 0.487976 MiB
00:19:59.896 element at address: 0x200019600000 with size: 0.485413 MiB
00:19:59.896 element at address: 0x200012c00000 with size: 0.443237 MiB
00:19:59.896 element at address: 0x200028000000 with size: 0.390442 MiB
00:19:59.896 element at address: 0x200000800000 with size: 0.350891 MiB
00:19:59.896 list of standard malloc elements. size: 199.286255 MiB
00:19:59.896 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:19:59.896 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:19:59.896 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:19:59.896 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:19:59.896 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:19:59.896 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:19:59.896 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:19:59.896 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:19:59.896 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:19:59.896 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:19:59.896 element at address: 0x200012bff040 with size: 0.000305 MiB
00:19:59.897 [several hundred per-slot elements of 0.000244 MiB each, condensed; the runs cover 0x2000002d7b00-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d7c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012cf24c0, 0x200018afdd00-0x200018efdd00, 0x2000192ffc40-0x2000196bc680, 0x20001ac903c0-0x20001ac953c0 and 0x200028063f40-0x20002806fe80]
00:19:59.898 list of memzone associated elements.
size: 599.920898 MiB 00:19:59.898 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:19:59.898 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:19:59.898 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:19:59.898 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:19:59.898 element at address: 0x200012df4740 with size: 92.045105 MiB 00:19:59.898 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59389_0 00:19:59.898 element at address: 0x200000dff340 with size: 48.003113 MiB 00:19:59.898 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59389_0 00:19:59.898 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:19:59.898 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59389_0 00:19:59.898 element at address: 0x2000197be900 with size: 20.255615 MiB 00:19:59.898 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:19:59.898 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:19:59.898 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:19:59.898 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:19:59.898 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59389_0 00:19:59.898 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:19:59.898 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59389 00:19:59.898 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:19:59.898 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59389 00:19:59.898 element at address: 0x200018efde00 with size: 1.008179 MiB 00:19:59.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:19:59.898 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:19:59.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:19:59.898 element at address: 0x200018afde00 with size: 1.008179 MiB 00:19:59.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:19:59.898 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:19:59.898 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:19:59.898 element at address: 0x200000cff100 with size: 1.000549 MiB 00:19:59.898 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59389 00:19:59.898 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:19:59.898 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59389 00:19:59.898 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:19:59.898 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59389 00:19:59.898 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:19:59.898 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59389 00:19:59.898 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:19:59.898 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59389 00:19:59.898 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:19:59.898 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59389 00:19:59.898 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:19:59.898 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:19:59.898 element at address: 0x200012c72280 with size: 0.500549 MiB 00:19:59.898 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:19:59.898 element at address: 0x20001967c440 with size: 0.250549 MiB 00:19:59.898 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:19:59.898 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:19:59.898 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59389 00:19:59.898 element at address: 0x20000085df80 with size: 0.125549 MiB 00:19:59.898 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59389 00:19:59.898 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:19:59.898 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:19:59.898 element at address: 0x200028064140 with size: 0.023804 MiB 00:19:59.898 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:19:59.898 element at address: 0x200000859d40 with size: 0.016174 MiB 00:19:59.898 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59389 00:19:59.898 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:19:59.898 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:19:59.898 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:19:59.898 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59389 00:19:59.898 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:19:59.898 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59389 00:19:59.898 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:19:59.898 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59389 00:19:59.898 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:19:59.898 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:19:59.898 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:19:59.898 07:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59389 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59389 ']' 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59389 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59389 00:19:59.898 killing process with pid 59389 00:19:59.898 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.899 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.899 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59389' 00:19:59.899 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59389 00:19:59.899 07:17:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59389 00:20:02.431 ************************************ 00:20:02.431 END TEST dpdk_mem_utility 00:20:02.431 ************************************ 00:20:02.431 00:20:02.431 real 0m4.573s 00:20:02.431 user 0m4.629s 00:20:02.431 sys 0m0.649s 00:20:02.431 07:17:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.431 07:17:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:20:02.431 07:17:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:20:02.431 07:17:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:02.431 07:17:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.431 07:17:26 -- common/autotest_common.sh@10 -- # set +x 
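The teardown traced above is autotest's killprocess pattern: validate the pid argument, confirm the process still exists, refuse to signal a sudo wrapper, then kill and reap. A minimal bash sketch reconstructed from the xtrace lines (not the verbatim common/autotest_common.sh source):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                 # mirrors the '[' -z 59389 ']' guard
        kill -0 "$pid" 2> /dev/null || return 0   # nothing left to kill
        if [[ $(uname) == Linux ]]; then
            # same safety check as the trace: never signal a sudo wrapper
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and propagate its exit status
    }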
00:20:02.431 ************************************ 00:20:02.431 START TEST event 00:20:02.431 ************************************ 00:20:02.431 07:17:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:20:02.689 * Looking for test storage... 00:20:02.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1693 -- # lcov --version 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:02.690 07:17:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:02.690 07:17:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:02.690 07:17:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:02.690 07:17:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:20:02.690 07:17:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:20:02.690 07:17:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:20:02.690 07:17:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:20:02.690 07:17:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:20:02.690 07:17:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:20:02.690 07:17:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:20:02.690 07:17:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:02.690 07:17:26 event -- scripts/common.sh@344 -- # case "$op" in 00:20:02.690 07:17:26 event -- scripts/common.sh@345 -- # : 1 00:20:02.690 07:17:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:02.690 07:17:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:02.690 07:17:26 event -- scripts/common.sh@365 -- # decimal 1 00:20:02.690 07:17:26 event -- scripts/common.sh@353 -- # local d=1 00:20:02.690 07:17:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:02.690 07:17:26 event -- scripts/common.sh@355 -- # echo 1 00:20:02.690 07:17:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:20:02.690 07:17:26 event -- scripts/common.sh@366 -- # decimal 2 00:20:02.690 07:17:26 event -- scripts/common.sh@353 -- # local d=2 00:20:02.690 07:17:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:02.690 07:17:26 event -- scripts/common.sh@355 -- # echo 2 00:20:02.690 07:17:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:20:02.690 07:17:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:02.690 07:17:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:02.690 07:17:26 event -- scripts/common.sh@368 -- # return 0 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.690 --rc genhtml_branch_coverage=1 00:20:02.690 --rc genhtml_function_coverage=1 00:20:02.690 --rc genhtml_legend=1 00:20:02.690 --rc geninfo_all_blocks=1 00:20:02.690 --rc geninfo_unexecuted_blocks=1 00:20:02.690 00:20:02.690 ' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.690 --rc genhtml_branch_coverage=1 00:20:02.690 --rc genhtml_function_coverage=1 00:20:02.690 --rc genhtml_legend=1 00:20:02.690 --rc 
geninfo_all_blocks=1 00:20:02.690 --rc geninfo_unexecuted_blocks=1 00:20:02.690 00:20:02.690 ' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.690 --rc genhtml_branch_coverage=1 00:20:02.690 --rc genhtml_function_coverage=1 00:20:02.690 --rc genhtml_legend=1 00:20:02.690 --rc geninfo_all_blocks=1 00:20:02.690 --rc geninfo_unexecuted_blocks=1 00:20:02.690 00:20:02.690 ' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:02.690 --rc genhtml_branch_coverage=1 00:20:02.690 --rc genhtml_function_coverage=1 00:20:02.690 --rc genhtml_legend=1 00:20:02.690 --rc geninfo_all_blocks=1 00:20:02.690 --rc geninfo_unexecuted_blocks=1 00:20:02.690 00:20:02.690 ' 00:20:02.690 07:17:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:02.690 07:17:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:20:02.690 07:17:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:20:02.690 07:17:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:02.690 07:17:26 event -- common/autotest_common.sh@10 -- # set +x 00:20:02.690 ************************************ 00:20:02.690 START TEST event_perf 00:20:02.690 ************************************ 00:20:02.690 07:17:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:20:02.948 Running I/O for 1 seconds...[2024-11-20 07:17:26.898691] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:02.948 [2024-11-20 07:17:26.899048] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59502 ] 00:20:02.948 [2024-11-20 07:17:27.090201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.206 [2024-11-20 07:17:27.272441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.206 [2024-11-20 07:17:27.272615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.206 Running I/O for 1 seconds...[2024-11-20 07:17:27.272776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.206 [2024-11-20 07:17:27.272798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.582 00:20:04.582 lcore 0: 182939 00:20:04.582 lcore 1: 182941 00:20:04.582 lcore 2: 182939 00:20:04.582 lcore 3: 182940 00:20:04.582 done. 
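Every suite in this log runs through the same run_test wrapper, which prints the START/END banners and the real/user/sys timing seen around each test. A simplified, hypothetical equivalent (the real wrapper in common/autotest_common.sh also manages xtrace state and argument checks):

    run_test() {
        local test_name=$1
        shift
        echo "************ START TEST $test_name ************"
        time "$@"            # run the test binary/script and report timing
        local rc=$?
        echo "************ END TEST $test_name ************"
        return $rc
    }

    # e.g. the invocation seen in this log:
    run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1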
00:20:04.582 ************************************ 00:20:04.582 END TEST event_perf 00:20:04.582 ************************************ 00:20:04.582 00:20:04.582 real 0m1.660s 00:20:04.582 user 0m4.411s 00:20:04.582 sys 0m0.124s 00:20:04.582 07:17:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.582 07:17:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:20:04.582 07:17:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:20:04.582 07:17:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:04.582 07:17:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.582 07:17:28 event -- common/autotest_common.sh@10 -- # set +x 00:20:04.582 ************************************ 00:20:04.582 START TEST event_reactor 00:20:04.582 ************************************ 00:20:04.582 07:17:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:20:04.582 [2024-11-20 07:17:28.617151] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:04.582 [2024-11-20 07:17:28.617463] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59544 ] 00:20:04.841 [2024-11-20 07:17:28.787872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.841 [2024-11-20 07:17:28.916246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.232 test_start 00:20:06.232 oneshot 00:20:06.232 tick 100 00:20:06.232 tick 100 00:20:06.232 tick 250 00:20:06.232 tick 100 00:20:06.232 tick 100 00:20:06.232 tick 100 00:20:06.232 tick 250 00:20:06.232 tick 500 00:20:06.232 tick 100 00:20:06.232 tick 100 00:20:06.232 tick 250 00:20:06.232 tick 100 00:20:06.232 tick 100 00:20:06.232 test_end 00:20:06.232 00:20:06.232 real 0m1.578s 00:20:06.232 user 0m1.373s 00:20:06.232 sys 0m0.097s 00:20:06.232 07:17:30 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.232 ************************************ 00:20:06.232 END TEST event_reactor 00:20:06.232 ************************************ 00:20:06.232 07:17:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:20:06.232 07:17:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:06.232 07:17:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:06.232 07:17:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.232 07:17:30 event -- common/autotest_common.sh@10 -- # set +x 00:20:06.232 ************************************ 00:20:06.232 START TEST event_reactor_perf 00:20:06.232 ************************************ 00:20:06.232 07:17:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:20:06.232 [2024-11-20 07:17:30.268396] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:06.233 [2024-11-20 07:17:30.268895] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59587 ] 00:20:06.490 [2024-11-20 07:17:30.473168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.490 [2024-11-20 07:17:30.640918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.864 test_start 00:20:07.864 test_end 00:20:07.864 Performance: 348746 events per second 00:20:07.864 00:20:07.864 real 0m1.663s 00:20:07.864 user 0m1.433s 00:20:07.864 sys 0m0.120s 00:20:07.864 07:17:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.864 ************************************ 00:20:07.864 END TEST event_reactor_perf 00:20:07.864 07:17:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:20:07.864 ************************************ 00:20:07.864 07:17:31 event -- event/event.sh@49 -- # uname -s 00:20:07.864 07:17:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:20:07.864 07:17:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:07.864 07:17:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.864 07:17:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.864 07:17:31 event -- common/autotest_common.sh@10 -- # set +x 00:20:07.864 ************************************ 00:20:07.864 START TEST event_scheduler 00:20:07.864 ************************************ 00:20:07.864 07:17:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:20:07.864 * Looking for test storage... 
00:20:07.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:20:07.864 07:17:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.864 07:17:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.864 07:17:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
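The long xtrace above is the lcov version gate: lt 1.15 2 delegates to cmp_versions, which splits both version strings on dots, dashes and colons and compares them field by field. A condensed sketch of that logic (simplified; the real scripts/common.sh tracks ver1_l/ver2_l and supports more operators):

    lt() { cmp_versions "$1" '<' "$2"; }    # the entry point traced above: lt 1.15 2

    cmp_versions() {
        local IFS=.-:                       # split on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ver1[v] > ver2[v] )) && { [[ $2 == *'>'* ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]                   # all fields equal: only ==, <=, >= succeed
    }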
00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.123 07:17:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.123 --rc genhtml_branch_coverage=1 00:20:08.123 --rc genhtml_function_coverage=1 00:20:08.123 --rc genhtml_legend=1 00:20:08.123 --rc geninfo_all_blocks=1 00:20:08.123 --rc geninfo_unexecuted_blocks=1 00:20:08.123 00:20:08.123 ' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.123 --rc genhtml_branch_coverage=1 00:20:08.123 --rc genhtml_function_coverage=1 00:20:08.123 --rc genhtml_legend=1 00:20:08.123 --rc geninfo_all_blocks=1 00:20:08.123 --rc geninfo_unexecuted_blocks=1 00:20:08.123 00:20:08.123 ' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.123 --rc genhtml_branch_coverage=1 00:20:08.123 --rc genhtml_function_coverage=1 00:20:08.123 --rc genhtml_legend=1 00:20:08.123 --rc geninfo_all_blocks=1 00:20:08.123 --rc geninfo_unexecuted_blocks=1 00:20:08.123 00:20:08.123 ' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:08.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.123 --rc genhtml_branch_coverage=1 00:20:08.123 --rc genhtml_function_coverage=1 00:20:08.123 --rc genhtml_legend=1 00:20:08.123 --rc geninfo_all_blocks=1 00:20:08.123 --rc geninfo_unexecuted_blocks=1 00:20:08.123 00:20:08.123 ' 00:20:08.123 07:17:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:20:08.123 07:17:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59663 00:20:08.123 07:17:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:20:08.123 07:17:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59663 00:20:08.123 07:17:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59663 ']' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.123 07:17:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:08.123 [2024-11-20 07:17:32.273020] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
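waitforlisten, traced above, blocks until the scheduler app (spawned with -m 0xF -p 0x2 --wait-for-rpc -f) answers on /var/tmp/spdk.sock. A rough sketch of that polling loop, with the socket path and retry limit taken from the trace; the rpc_get_methods probe is an assumption about how liveness is checked, and the real helper has more error handling:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [[ -n $pid ]] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # the app died during startup
            if [[ -S $rpc_addr ]] &&
               /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # RPC server is answering
            fi
            sleep 0.5
        done
        return 1
    }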
00:20:08.123 [2024-11-20 07:17:32.273653] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:20:08.382 [2024-11-20 07:17:32.484089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.640 [2024-11-20 07:17:32.665211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.640 [2024-11-20 07:17:32.665368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.640 [2024-11-20 07:17:32.665442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.640 [2024-11-20 07:17:32.665462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:20:09.206 07:17:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:09.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:09.206 POWER: Cannot set governor of lcore 0 to userspace 00:20:09.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:09.206 POWER: Cannot set governor of lcore 0 to performance 00:20:09.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:09.206 POWER: Cannot set governor of lcore 0 to userspace 00:20:09.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:09.206 POWER: Cannot set governor of lcore 0 to userspace 00:20:09.206 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:20:09.206 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:20:09.206 POWER: Unable to set Power Management Environment for lcore 0 00:20:09.206 [2024-11-20 07:17:33.308099] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:20:09.206 [2024-11-20 07:17:33.308130] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:20:09.206 [2024-11-20 07:17:33.308146] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:20:09.206 [2024-11-20 07:17:33.308171] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:20:09.206 [2024-11-20 07:17:33.308184] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:20:09.206 [2024-11-20 07:17:33.308199] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.206 07:17:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.206 07:17:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:09.485 [2024-11-20 07:17:33.664872] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
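The POWER errors above are non-fatal: with no writable cpufreq governors in the VM, the dynamic scheduler cannot attach the DPDK governor and falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). The RPC sequence the test then drives, lifted from the trace:

    # scheduler app is already running with --wait-for-rpc, so init is paused
    rpc_cmd framework_set_scheduler dynamic   # must be issued before subsystem init
    rpc_cmd framework_start_init              # finish startup; reactors begin polling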
00:20:09.485 07:17:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.485 07:17:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:20:09.485 07:17:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:09.485 07:17:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.485 07:17:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 ************************************ 00:20:09.743 START TEST scheduler_create_thread 00:20:09.743 ************************************ 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 2 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 3 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 4 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 5 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 6 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 7 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 8 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 9 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 10 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.743 07:17:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:11.121 07:17:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.121 07:17:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:20:11.121 07:17:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:20:11.121 07:17:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.121 07:17:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:12.519 ************************************ 00:20:12.519 END TEST scheduler_create_thread 00:20:12.519 ************************************ 00:20:12.519 07:17:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.519 00:20:12.519 real 0m2.622s 00:20:12.519 user 0m0.021s 00:20:12.519 sys 0m0.010s 00:20:12.519 07:17:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.519 07:17:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:20:12.519 07:17:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:12.519 07:17:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59663 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59663 ']' 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59663 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59663 00:20:12.519 killing process with pid 59663 00:20:12.519 07:17:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:12.520 07:17:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:12.520 07:17:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59663' 00:20:12.520 07:17:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59663 00:20:12.520 07:17:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59663 00:20:12.794 [2024-11-20 07:17:36.780651] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
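The scheduler_create_thread subtest that just passed exercises the scheduler purely through the test app's RPC plugin: create pinned threads with a cpumask and a target active percentage, retune one, delete another. Condensed from the trace (the numeric thread IDs are whatever the app returns, 11 and 12 in this run):

    # create threads with a given cpumask (-m) and target active percentage (-a)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

    # unpinned thread at 0% busy, then retune it to 50%
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50

    # create one more thread and delete it again
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"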
00:20:14.205 ************************************ 00:20:14.205 END TEST event_scheduler 00:20:14.205 ************************************ 00:20:14.205 00:20:14.205 real 0m6.062s 00:20:14.205 user 0m10.571s 00:20:14.205 sys 0m0.584s 00:20:14.205 07:17:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.205 07:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 07:17:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:20:14.205 07:17:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:20:14.205 07:17:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.205 07:17:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.205 07:17:38 event -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 ************************************ 00:20:14.205 START TEST app_repeat 00:20:14.205 ************************************ 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:20:14.205 Process app_repeat pid: 59769 00:20:14.205 spdk_app_start Round 0 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59769 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59769' 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:20:14.205 07:17:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59769 /var/tmp/spdk-nbd.sock 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59769 ']' 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:14.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.205 07:17:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 [2024-11-20 07:17:38.146587] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:20:14.205 [2024-11-20 07:17:38.146980] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59769 ] 00:20:14.205 [2024-11-20 07:17:38.367264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:14.464 [2024-11-20 07:17:38.533150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.464 [2024-11-20 07:17:38.533157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.030 07:17:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.030 07:17:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:20:15.031 07:17:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:15.289 Malloc0 00:20:15.289 07:17:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:15.856 Malloc1 00:20:15.856 07:17:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:15.856 07:17:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:16.115 /dev/nbd0 00:20:16.115 07:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:16.115 07:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:16.115 07:17:40 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:16.115 1+0 records in 00:20:16.115 1+0 records out 00:20:16.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000841088 s, 4.9 MB/s 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:16.115 07:17:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:16.115 07:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.115 07:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:16.115 07:17:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:16.373 /dev/nbd1 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:16.373 1+0 records in 00:20:16.373 1+0 records out 00:20:16.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382733 s, 10.7 MB/s 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:16.373 07:17:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:16.373 07:17:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:16.373 
07:17:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:16.634 07:17:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:16.634 { 00:20:16.634 "nbd_device": "/dev/nbd0", 00:20:16.634 "bdev_name": "Malloc0" 00:20:16.634 }, 00:20:16.634 { 00:20:16.634 "nbd_device": "/dev/nbd1", 00:20:16.634 "bdev_name": "Malloc1" 00:20:16.634 } 00:20:16.634 ]' 00:20:16.634 07:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:16.634 { 00:20:16.634 "nbd_device": "/dev/nbd0", 00:20:16.634 "bdev_name": "Malloc0" 00:20:16.634 }, 00:20:16.634 { 00:20:16.634 "nbd_device": "/dev/nbd1", 00:20:16.634 "bdev_name": "Malloc1" 00:20:16.634 } 00:20:16.634 ]' 00:20:16.634 07:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:16.918 /dev/nbd1' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:16.918 /dev/nbd1' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:16.918 256+0 records in 00:20:16.918 256+0 records out 00:20:16.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101166 s, 104 MB/s 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:16.918 256+0 records in 00:20:16.918 256+0 records out 00:20:16.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322954 s, 32.5 MB/s 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:16.918 256+0 records in 00:20:16.918 256+0 records out 00:20:16.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0385783 s, 27.2 MB/s 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:16.918 07:17:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.918 07:17:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.179 07:17:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:17.815 07:17:41 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:17.815 07:17:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:17.815 07:17:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:17.815 07:17:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:18.380 07:17:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:20:19.770 [2024-11-20 07:17:43.722526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:19.770 [2024-11-20 07:17:43.844902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.770 [2024-11-20 07:17:43.844903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.037 [2024-11-20 07:17:44.056789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:20.037 [2024-11-20 07:17:44.056886] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:21.443 spdk_app_start Round 1 00:20:21.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:21.443 07:17:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:21.443 07:17:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:20:21.443 07:17:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59769 /var/tmp/spdk-nbd.sock 00:20:21.443 07:17:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59769 ']' 00:20:21.443 07:17:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:21.443 07:17:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.443 07:17:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
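The teardown check traced above re-queries nbd_get_disks after both devices are stopped and counts /dev/nbd entries in the returned JSON; grep -c exits non-zero when it matches nothing, so the pipeline is backstopped with true to keep the count at 0 instead of aborting under set -e. Condensed, the check is roughly:

    # Count exported nbd devices after teardown; '|| true' absorbs grep's
    # exit status 1 on zero matches (sketch of the traced nbd_get_count).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [[ $count -eq 0 ]] || echo "nbd devices still attached: $count"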
00:20:21.444 07:17:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.444 07:17:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:21.702 07:17:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.702 07:17:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:20:21.702 07:17:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:21.960 Malloc0 00:20:22.218 07:17:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:22.477 Malloc1 00:20:22.477 07:17:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.477 07:17:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:22.736 /dev/nbd0 00:20:22.736 07:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:22.736 07:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:22.736 1+0 records in 00:20:22.736 1+0 records out 
00:20:22.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357595 s, 11.5 MB/s 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.736 07:17:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:22.736 07:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.736 07:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.736 07:17:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:22.994 /dev/nbd1 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:23.269 1+0 records in 00:20:23.269 1+0 records out 00:20:23.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353653 s, 11.6 MB/s 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.269 07:17:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.269 07:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:23.527 { 00:20:23.527 "nbd_device": "/dev/nbd0", 00:20:23.527 "bdev_name": "Malloc0" 00:20:23.527 }, 00:20:23.527 { 00:20:23.527 "nbd_device": "/dev/nbd1", 00:20:23.527 "bdev_name": "Malloc1" 00:20:23.527 } 
00:20:23.527 ]' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:23.527 { 00:20:23.527 "nbd_device": "/dev/nbd0", 00:20:23.527 "bdev_name": "Malloc0" 00:20:23.527 }, 00:20:23.527 { 00:20:23.527 "nbd_device": "/dev/nbd1", 00:20:23.527 "bdev_name": "Malloc1" 00:20:23.527 } 00:20:23.527 ]' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:23.527 /dev/nbd1' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:23.527 /dev/nbd1' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:23.527 256+0 records in 00:20:23.527 256+0 records out 00:20:23.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684004 s, 153 MB/s 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:23.527 256+0 records in 00:20:23.527 256+0 records out 00:20:23.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306231 s, 34.2 MB/s 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:23.527 256+0 records in 00:20:23.527 256+0 records out 00:20:23.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0424366 s, 24.7 MB/s 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:23.527 07:17:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.527 07:17:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:24.094 07:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.094 07:17:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:24.353 07:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:24.611 07:17:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:24.611 07:17:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:25.175 07:17:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:20:26.638 [2024-11-20 07:17:50.569870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:26.638 [2024-11-20 07:17:50.695188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.638 [2024-11-20 07:17:50.695207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.912 [2024-11-20 07:17:50.916276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:26.912 [2024-11-20 07:17:50.916401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:28.291 spdk_app_start Round 2 00:20:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:28.291 07:17:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:20:28.291 07:17:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:20:28.291 07:17:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59769 /var/tmp/spdk-nbd.sock 00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59769 ']' 00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
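Every nbd attach in these rounds runs through the same waitfornbd gate: poll /proc/partitions until the device name shows up, then prove the device actually services I/O by reading one 4 KiB block with O_DIRECT and checking that a non-empty block landed in the scratch file. A minimal sketch under those assumptions (scratch path arbitrary):

    # Wait for an nbd device to appear and become readable (sketch of
    # the waitfornbd helper; retry counts mirror the traced loop bounds).
    waitfornbd_sketch() {
        local nbd_name=$1 i tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct &&
               [[ $(stat -c %s "$tmp") -ne 0 ]]; then
                rm -f "$tmp"; return 0
            fi
            sleep 0.1
        done
        rm -f "$tmp"; return 1
    }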
00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.291 07:17:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:28.550 07:17:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.550 07:17:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:20:28.550 07:17:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:28.807 Malloc0 00:20:28.807 07:17:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:20:29.374 Malloc1 00:20:29.374 07:17:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.374 07:17:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:20:29.632 /dev/nbd0 00:20:29.632 07:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:29.632 07:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:29.632 1+0 records in 00:20:29.632 1+0 records out 
00:20:29.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291811 s, 14.0 MB/s 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:29.632 07:17:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:29.632 07:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.632 07:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.632 07:17:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:20:29.891 /dev/nbd1 00:20:29.891 07:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:29.891 07:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:20:29.891 1+0 records in 00:20:29.891 1+0 records out 00:20:29.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363351 s, 11.3 MB/s 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:20:29.891 07:17:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:20:29.891 07:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:29.891 07:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:20:29.891 07:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:29.891 07:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:29.891 07:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:29.891 07:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.891 07:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:30.149 07:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:30.149 { 00:20:30.149 "nbd_device": "/dev/nbd0", 00:20:30.149 "bdev_name": "Malloc0" 00:20:30.149 }, 00:20:30.149 { 00:20:30.149 "nbd_device": "/dev/nbd1", 00:20:30.149 "bdev_name": "Malloc1" 00:20:30.149 } 
00:20:30.149 ]' 00:20:30.149 07:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:30.149 07:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:30.149 { 00:20:30.149 "nbd_device": "/dev/nbd0", 00:20:30.149 "bdev_name": "Malloc0" 00:20:30.149 }, 00:20:30.149 { 00:20:30.149 "nbd_device": "/dev/nbd1", 00:20:30.149 "bdev_name": "Malloc1" 00:20:30.149 } 00:20:30.149 ]' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:30.407 /dev/nbd1' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:30.407 /dev/nbd1' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:20:30.407 256+0 records in 00:20:30.407 256+0 records out 00:20:30.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00824379 s, 127 MB/s 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:30.407 256+0 records in 00:20:30.407 256+0 records out 00:20:30.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337216 s, 31.1 MB/s 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:30.407 256+0 records in 00:20:30.407 256+0 records out 00:20:30.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383199 s, 27.4 MB/s 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:30.407 07:17:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.407 07:17:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.973 07:17:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:31.231 07:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:31.231 07:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:31.231 07:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.232 07:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:31.546 07:17:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:20:31.546 07:17:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:20:32.155 07:17:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:20:33.530 [2024-11-20 07:17:57.483759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:33.530 [2024-11-20 07:17:57.603344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.530 [2024-11-20 07:17:57.603353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.788 [2024-11-20 07:17:57.824303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:20:33.788 [2024-11-20 07:17:57.824408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:20:35.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:35.233 07:17:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59769 /var/tmp/spdk-nbd.sock 00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59769 ']' 00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
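Each round's data verification above is the same three-step flow: fill a 1 MiB scratch file from /dev/urandom, write it through every nbd device with direct I/O, then cmp the first 1 MiB of each device against the source. Roughly:

    # Write 1 MiB of random data to each nbd device and verify it back
    # (sketch of nbd_dd_data_verify; device list as used in this run).
    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"   # byte-for-byte check of the first 1 MiB
    done
    rm "$tmp"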
00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.233 07:17:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:20:35.492 07:17:59 event.app_repeat -- event/event.sh@39 -- # killprocess 59769 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59769 ']' 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59769 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59769 00:20:35.492 killing process with pid 59769 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59769' 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59769 00:20:35.492 07:17:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59769 00:20:36.870 spdk_app_start is called in Round 0. 00:20:36.870 Shutdown signal received, stop current app iteration 00:20:36.870 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:20:36.870 spdk_app_start is called in Round 1. 00:20:36.870 Shutdown signal received, stop current app iteration 00:20:36.870 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:20:36.870 spdk_app_start is called in Round 2. 00:20:36.870 Shutdown signal received, stop current app iteration 00:20:36.870 Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 reinitialization... 00:20:36.870 spdk_app_start is called in Round 3. 00:20:36.870 Shutdown signal received, stop current app iteration 00:20:36.870 07:18:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:20:36.870 ************************************ 00:20:36.870 END TEST app_repeat 00:20:36.870 ************************************ 00:20:36.870 07:18:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:20:36.870 00:20:36.870 real 0m22.746s 00:20:36.870 user 0m49.688s 00:20:36.870 sys 0m4.021s 00:20:36.870 07:18:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.870 07:18:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:20:36.870 07:18:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:20:36.870 07:18:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:36.870 07:18:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.870 07:18:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.870 07:18:00 event -- common/autotest_common.sh@10 -- # set +x 00:20:36.870 ************************************ 00:20:36.870 START TEST cpu_locks 00:20:36.870 ************************************ 00:20:36.870 07:18:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:20:36.870 * Looking for test storage... 
00:20:36.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:20:36.870 07:18:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:36.870 07:18:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:20:36.870 07:18:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:36.870 07:18:01 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.870 07:18:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.871 07:18:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:36.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.871 --rc genhtml_branch_coverage=1 00:20:36.871 --rc genhtml_function_coverage=1 00:20:36.871 --rc genhtml_legend=1 00:20:36.871 --rc geninfo_all_blocks=1 00:20:36.871 --rc geninfo_unexecuted_blocks=1 00:20:36.871 00:20:36.871 ' 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:36.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.871 --rc genhtml_branch_coverage=1 00:20:36.871 --rc genhtml_function_coverage=1 
00:20:36.871 --rc genhtml_legend=1 00:20:36.871 --rc geninfo_all_blocks=1 00:20:36.871 --rc geninfo_unexecuted_blocks=1 00:20:36.871 00:20:36.871 ' 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:36.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.871 --rc genhtml_branch_coverage=1 00:20:36.871 --rc genhtml_function_coverage=1 00:20:36.871 --rc genhtml_legend=1 00:20:36.871 --rc geninfo_all_blocks=1 00:20:36.871 --rc geninfo_unexecuted_blocks=1 00:20:36.871 00:20:36.871 ' 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:36.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.871 --rc genhtml_branch_coverage=1 00:20:36.871 --rc genhtml_function_coverage=1 00:20:36.871 --rc genhtml_legend=1 00:20:36.871 --rc geninfo_all_blocks=1 00:20:36.871 --rc geninfo_unexecuted_blocks=1 00:20:36.871 00:20:36.871 ' 00:20:36.871 07:18:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:20:36.871 07:18:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:20:36.871 07:18:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:20:36.871 07:18:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.871 07:18:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:36.871 ************************************ 00:20:36.871 START TEST default_locks 00:20:36.871 ************************************ 00:20:36.871 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60257 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:37.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60257 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60257 ']' 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.130 07:18:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:37.130 [2024-11-20 07:18:01.192749] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
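Note: the lcov gate traced above is a plain field-wise version compare. scripts/common.sh splits both version strings on '.', '-' and ':' (IFS=.-:) and compares the fields numerically, left to right; because 1 < 2 in the first field, lt 1.15 2 succeeds and the older lcov_*-prefixed --rc keys are selected. A minimal stand-alone sketch of that comparison (hypothetical rewrite, not the verbatim cmp_versions helper):

  lt() {
      local IFS=.-:
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # strictly older
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # strictly newer
      done
      return 1  # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x: keep the lcov_*-prefixed --rc keys"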
00:20:37.130 [2024-11-20 07:18:01.193124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:20:37.388 [2024-11-20 07:18:01.373048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.388 [2024-11-20 07:18:01.499635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.322 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.322 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:20:38.322 07:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60257 00:20:38.322 07:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60257 00:20:38.322 07:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60257 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60257 ']' 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60257 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60257 00:20:38.889 killing process with pid 60257 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60257' 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60257 00:20:38.889 07:18:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60257 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60257 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60257 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60257 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60257 ']' 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.169 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.169 ERROR: process (pid: 60257) is no longer running 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:42.169 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60257) - No such process 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.169 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:20:42.170 ************************************ 00:20:42.170 END TEST default_locks 00:20:42.170 ************************************ 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:42.170 00:20:42.170 real 0m4.628s 00:20:42.170 user 0m4.711s 00:20:42.170 sys 0m0.777s 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.170 07:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:20:42.170 07:18:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:20:42.170 07:18:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:42.170 07:18:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.170 07:18:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:42.170 ************************************ 00:20:42.170 START TEST default_locks_via_rpc 00:20:42.170 ************************************ 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:20:42.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
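Note: the default_locks teardown just above is the suite's negative check: once pid 60257 is killed, a second waitforlisten on it must fail, and the NOT wrapper turns that expected failure into a pass (es=1, not a >128 signal exit, so the test proceeds). A simplified sketch of the inversion (assumed; the real NOT in autotest_common.sh also inspects exit codes above 128):

  NOT() {
      if "$@"; then
          return 1  # command unexpectedly succeeded
      fi
      return 0      # command failed, which is what was wanted
  }
  NOT kill -0 60257 && echo "pid 60257 is gone, as expected"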
00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60337 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60337 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60337 ']' 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.170 07:18:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.170 [2024-11-20 07:18:05.908559] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:42.170 [2024-11-20 07:18:05.909636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60337 ] 00:20:42.170 [2024-11-20 07:18:06.117298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.170 [2024-11-20 07:18:06.288504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60337 00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:20:43.122 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60337 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60337 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60337 ']' 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60337 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.687 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60337 00:20:43.945 killing process with pid 60337 00:20:43.945 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.945 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.945 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60337' 00:20:43.945 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60337 00:20:43.945 07:18:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60337 00:20:46.506 00:20:46.506 real 0m4.853s 00:20:46.506 user 0m4.890s 00:20:46.506 sys 0m0.796s 00:20:46.506 07:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.506 ************************************ 00:20:46.506 END TEST default_locks_via_rpc 00:20:46.506 07:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:46.506 ************************************ 00:20:46.506 07:18:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:20:46.506 07:18:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.506 07:18:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.506 07:18:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:46.506 ************************************ 00:20:46.506 START TEST non_locking_app_on_locked_coremask 00:20:46.506 ************************************ 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:20:46.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
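Note: the recurring lslocks/grep pair is the suite's lock probe: a core lock counts as held only if the target pid owns a file lock whose path contains spdk_cpu_lock. A sketch matching the traced commands (assumed to mirror locks_exist in event/cpu_locks.sh):

  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 60337 && echo "pid 60337 holds /var/tmp/spdk_cpu_lock_* locks"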
00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60417 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60417 /var/tmp/spdk.sock 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60417 ']' 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.506 07:18:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:46.764 [2024-11-20 07:18:10.824229] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:46.764 [2024-11-20 07:18:10.824610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60417 ] 00:20:47.022 [2024-11-20 07:18:11.017523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.022 [2024-11-20 07:18:11.146958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60444 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:20:47.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60444 /var/tmp/spdk2.sock 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60444 ']' 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
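Note: non_locking_app_on_locked_coremask brings up a second target on the same core but with lock claiming switched off, so both processes are allowed to run. Invocation shapes as in the trace (the backgrounding and ordering here are an assumed simplification):

  build/bin/spdk_tgt -m 0x1 &                    # claims /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                   # skips the claim, shares core 0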
00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.955 07:18:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:20:48.214 [2024-11-20 07:18:12.210334] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:20:48.214 [2024-11-20 07:18:12.210694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60444 ] 00:20:48.214 [2024-11-20 07:18:12.410260] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:20:48.214 [2024-11-20 07:18:12.410337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.471 [2024-11-20 07:18:12.672469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.003 07:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.003 07:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:20:51.003 07:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60417 00:20:51.003 07:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60417 00:20:51.003 07:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60417 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60417 ']' 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60417 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.937 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60417 00:20:52.259 killing process with pid 60417 00:20:52.259 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.259 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.259 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60417' 00:20:52.259 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60417 00:20:52.259 07:18:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60417 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60444 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60444 ']' 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60444 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60444 00:20:58.821 killing process with pid 60444 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60444' 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60444 00:20:58.821 07:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60444 00:21:00.721 00:21:00.721 real 0m14.170s 00:21:00.721 user 0m14.825s 00:21:00.721 sys 0m1.676s 00:21:00.721 07:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.721 07:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:00.721 ************************************ 00:21:00.721 END TEST non_locking_app_on_locked_coremask 00:21:00.721 ************************************ 00:21:00.721 07:18:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:21:00.721 07:18:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.721 07:18:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.721 07:18:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:00.721 ************************************ 00:21:00.721 START TEST locking_app_on_unlocked_coremask 00:21:00.721 ************************************ 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60614 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60614 /var/tmp/spdk.sock 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60614 ']' 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
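Note: killprocess, traced at every teardown, resolves the process name first so that a target launched under sudo would be killed with sudo as well (the runs above always see reactor_0, so the plain-kill branch is taken). A reconstruction from the traced commands, not the verbatim autotest_common.sh source:

  killprocess() {
      local pid=$1 process_name
      kill -0 "$pid"                                     # must still be alive
      [[ $(uname) == Linux ]] &&
          process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [[ $process_name == sudo ]]; then
          sudo kill "$pid"  # assumed branch, never taken in this log
      else
          kill "$pid"
      fi
      wait "$pid" || true
  }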
00:21:00.721 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.722 07:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:00.979 [2024-11-20 07:18:25.019333] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:00.979 [2024-11-20 07:18:25.019773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60614 ] 00:21:01.237 [2024-11-20 07:18:25.205494] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:21:01.237 [2024-11-20 07:18:25.205793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.237 [2024-11-20 07:18:25.386632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60635 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60635 /var/tmp/spdk2.sock 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:02.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.615 07:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:02.615 [2024-11-20 07:18:26.561133] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:02.615 [2024-11-20 07:18:26.561712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60635 ] 00:21:02.615 [2024-11-20 07:18:26.777328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.873 [2024-11-20 07:18:27.035122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.404 07:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.404 07:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:05.404 07:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60635 00:21:05.404 07:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60635 00:21:05.404 07:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60614 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60614 ']' 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60614 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60614 00:21:06.341 killing process with pid 60614 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60614' 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60614 00:21:06.341 07:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60614 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60635 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60635 ']' 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60635 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60635 00:21:11.611 killing process with pid 60635 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:11.611 07:18:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60635' 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60635 00:21:11.611 07:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60635 00:21:14.893 ************************************ 00:21:14.893 END TEST locking_app_on_unlocked_coremask 00:21:14.893 ************************************ 00:21:14.893 00:21:14.893 real 0m13.627s 00:21:14.893 user 0m14.299s 00:21:14.893 sys 0m1.595s 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:14.893 07:18:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:21:14.893 07:18:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.893 07:18:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.893 07:18:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:14.893 ************************************ 00:21:14.893 START TEST locking_app_on_locked_coremask 00:21:14.893 ************************************ 00:21:14.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60800 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60800 /var/tmp/spdk.sock 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60800 ']' 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.893 07:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:14.893 [2024-11-20 07:18:38.716733] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:14.893 [2024-11-20 07:18:38.717194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60800 ] 00:21:14.893 [2024-11-20 07:18:38.920773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.152 [2024-11-20 07:18:39.105665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60822 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60822 /var/tmp/spdk2.sock 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60822 /var/tmp/spdk2.sock 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60822 /var/tmp/spdk2.sock 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60822 ']' 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.190 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:16.190 [2024-11-20 07:18:40.293951] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
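Note: unlike the previous tests, the second instance launched above reuses core 0 with lock claiming left on, so its startup must abort; the NOT waitforlisten wrapper above asserts that, and the claim error follows below. Minimal shape of the collision (assumed reproduction, paths as in the log):

  build/bin/spdk_tgt -m 0x1 &                       # first claim wins
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock  # second claim aborts:
  # app.c: Cannot create lock on core 0, probably process <pid> has claimed it.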
00:21:16.190 [2024-11-20 07:18:40.294124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60822 ] 00:21:16.469 [2024-11-20 07:18:40.501210] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60800 has claimed it. 00:21:16.469 [2024-11-20 07:18:40.501348] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:21:17.036 ERROR: process (pid: 60822) is no longer running 00:21:17.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60822) - No such process 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60800 00:21:17.036 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:21:17.037 07:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60800 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60800 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60800 ']' 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60800 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.295 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60800 00:21:17.553 killing process with pid 60800 00:21:17.553 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.553 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.553 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60800' 00:21:17.553 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60800 00:21:17.553 07:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60800 00:21:20.088 00:21:20.088 real 0m5.670s 00:21:20.088 user 0m6.079s 00:21:20.088 sys 0m1.051s 00:21:20.088 07:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.088 ************************************ 00:21:20.088 
07:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:20.088 END TEST locking_app_on_locked_coremask 00:21:20.088 ************************************ 00:21:20.347 07:18:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:21:20.347 07:18:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.347 07:18:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.347 07:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:20.347 ************************************ 00:21:20.347 START TEST locking_overlapped_coremask 00:21:20.347 ************************************ 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60897 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60897 /var/tmp/spdk.sock 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60897 ']' 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.347 07:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:20.347 [2024-11-20 07:18:44.447855] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:20.347 [2024-11-20 07:18:44.448267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60897 ] 00:21:20.605 [2024-11-20 07:18:44.641683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:20.605 [2024-11-20 07:18:44.778010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.605 [2024-11-20 07:18:44.778062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.605 [2024-11-20 07:18:44.778071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60920 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60920 /var/tmp/spdk2.sock 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60920 /var/tmp/spdk2.sock 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60920 /var/tmp/spdk2.sock 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60920 ']' 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.983 07:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:21.983 [2024-11-20 07:18:45.906828] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
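Note: the two cpumasks in this test overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target's claim of core 2 has to fail, which is the "Cannot create lock on core 2" error just below. The overlap is a one-line check:

  printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))  # -> 0x4, i.e. core 2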
00:21:21.983 [2024-11-20 07:18:45.907482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60920 ] 00:21:21.983 [2024-11-20 07:18:46.120995] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60897 has claimed it. 00:21:21.983 [2024-11-20 07:18:46.121088] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:21:22.550 ERROR: process (pid: 60920) is no longer running 00:21:22.550 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60920) - No such process 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60897 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60897 ']' 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60897 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60897 00:21:22.550 killing process with pid 60897 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60897' 00:21:22.550 07:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60897 00:21:22.550 07:18:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60897 00:21:25.838 ************************************ 00:21:25.838 END TEST locking_overlapped_coremask 00:21:25.838 ************************************ 00:21:25.838 00:21:25.838 real 0m5.121s 00:21:25.838 user 0m14.036s 00:21:25.838 sys 0m0.720s 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:21:25.838 07:18:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:21:25.838 07:18:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.838 07:18:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.838 07:18:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:25.838 ************************************ 00:21:25.838 START TEST locking_overlapped_coremask_via_rpc 00:21:25.838 ************************************ 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60990 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60990 /var/tmp/spdk.sock 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60990 ']' 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.838 07:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.838 [2024-11-20 07:18:49.642427] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:25.838 [2024-11-20 07:18:49.642610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60990 ] 00:21:25.838 [2024-11-20 07:18:49.838870] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
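Note: the lock-file audit traced at the end of the previous test compares the lock files actually present on disk against the exact set a 0x7 mask should leave behind. A sketch matching the traced globs (assumed shape of check_remaining_locks):

  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }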
00:21:25.838 [2024-11-20 07:18:49.838972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:25.838 [2024-11-20 07:18:49.986714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.838 [2024-11-20 07:18:49.986857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.838 [2024-11-20 07:18:49.986896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61019 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61019 /var/tmp/spdk2.sock 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61019 ']' 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:27.215 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.216 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:21:27.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:27.216 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.216 07:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:27.216 [2024-11-20 07:18:51.224106] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:27.216 [2024-11-20 07:18:51.224289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61019 ] 00:21:27.475 [2024-11-20 07:18:51.438716] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:21:27.475 [2024-11-20 07:18:51.438832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.733 [2024-11-20 07:18:51.791340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.733 [2024-11-20 07:18:51.791439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.733 [2024-11-20 07:18:51.791404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.267 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:30.268 [2024-11-20 07:18:54.161200] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60990 has claimed it. 
00:21:30.268 request: 00:21:30.268 { 00:21:30.268 "method": "framework_enable_cpumask_locks", 00:21:30.268 "req_id": 1 00:21:30.268 } 00:21:30.268 Got JSON-RPC error response 00:21:30.268 response: 00:21:30.268 { 00:21:30.268 "code": -32603, 00:21:30.268 "message": "Failed to claim CPU core: 2" 00:21:30.268 } 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60990 /var/tmp/spdk.sock 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60990 ']' 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61019 /var/tmp/spdk2.sock 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61019 ']' 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:21:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.268 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:21:30.527 00:21:30.527 real 0m5.214s 00:21:30.527 user 0m1.734s 00:21:30.527 sys 0m0.280s 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.527 07:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:30.527 ************************************ 00:21:30.527 END TEST locking_overlapped_coremask_via_rpc 00:21:30.527 ************************************ 00:21:30.785 07:18:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:21:30.785 07:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60990 ]] 00:21:30.785 07:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60990 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60990 ']' 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60990 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60990 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.785 killing process with pid 60990 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60990' 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60990 00:21:30.785 07:18:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60990 00:21:34.070 07:18:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61019 ]] 00:21:34.070 07:18:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61019 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61019 ']' 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61019 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.070 
07:18:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61019 00:21:34.070 killing process with pid 61019 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:34.070 07:18:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:34.071 07:18:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61019' 00:21:34.071 07:18:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61019 00:21:34.071 07:18:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61019 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:21:36.603 Process with pid 60990 is not found 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60990 ]] 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60990 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60990 ']' 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60990 00:21:36.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60990) - No such process 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60990 is not found' 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61019 ]] 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61019 00:21:36.603 Process with pid 61019 is not found 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61019 ']' 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61019 00:21:36.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61019) - No such process 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61019 is not found' 00:21:36.603 07:19:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:21:36.603 ************************************ 00:21:36.603 END TEST cpu_locks 00:21:36.603 ************************************ 00:21:36.603 00:21:36.603 real 0m59.804s 00:21:36.603 user 1m42.897s 00:21:36.603 sys 0m8.408s 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.603 07:19:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:21:36.603 00:21:36.603 real 1m34.094s 00:21:36.603 user 2m50.622s 00:21:36.603 sys 0m13.673s 00:21:36.603 ************************************ 00:21:36.603 END TEST event 00:21:36.603 ************************************ 00:21:36.603 07:19:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.603 07:19:00 event -- common/autotest_common.sh@10 -- # set +x 00:21:36.603 07:19:00 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:21:36.603 07:19:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:36.603 07:19:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.603 07:19:00 -- common/autotest_common.sh@10 -- # set +x 00:21:36.603 ************************************ 00:21:36.603 START TEST thread 00:21:36.603 ************************************ 00:21:36.603 07:19:00 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:21:36.861 * Looking for test storage... 
00:21:36.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:36.861 07:19:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.861 07:19:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.861 07:19:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.861 07:19:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.861 07:19:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.861 07:19:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.861 07:19:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.861 07:19:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.861 07:19:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.861 07:19:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.861 07:19:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.861 07:19:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:21:36.861 07:19:00 thread -- scripts/common.sh@345 -- # : 1 00:21:36.861 07:19:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.861 07:19:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.861 07:19:00 thread -- scripts/common.sh@365 -- # decimal 1 00:21:36.861 07:19:00 thread -- scripts/common.sh@353 -- # local d=1 00:21:36.861 07:19:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.861 07:19:00 thread -- scripts/common.sh@355 -- # echo 1 00:21:36.861 07:19:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.861 07:19:00 thread -- scripts/common.sh@366 -- # decimal 2 00:21:36.861 07:19:00 thread -- scripts/common.sh@353 -- # local d=2 00:21:36.861 07:19:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.861 07:19:00 thread -- scripts/common.sh@355 -- # echo 2 00:21:36.861 07:19:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.861 07:19:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.861 07:19:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.861 07:19:00 thread -- scripts/common.sh@368 -- # return 0 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.861 --rc genhtml_branch_coverage=1 00:21:36.861 --rc genhtml_function_coverage=1 00:21:36.861 --rc genhtml_legend=1 00:21:36.861 --rc geninfo_all_blocks=1 00:21:36.861 --rc geninfo_unexecuted_blocks=1 00:21:36.861 00:21:36.861 ' 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.861 --rc genhtml_branch_coverage=1 00:21:36.861 --rc genhtml_function_coverage=1 00:21:36.861 --rc genhtml_legend=1 00:21:36.861 --rc geninfo_all_blocks=1 00:21:36.861 --rc geninfo_unexecuted_blocks=1 00:21:36.861 00:21:36.861 ' 00:21:36.861 07:19:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:36.861 --rc genhtml_branch_coverage=1 00:21:36.861 --rc genhtml_function_coverage=1 00:21:36.861 --rc genhtml_legend=1 00:21:36.861 --rc geninfo_all_blocks=1 00:21:36.861 --rc geninfo_unexecuted_blocks=1 00:21:36.861 00:21:36.862 ' 00:21:36.862 07:19:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:36.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.862 --rc genhtml_branch_coverage=1 00:21:36.862 --rc genhtml_function_coverage=1 00:21:36.862 --rc genhtml_legend=1 00:21:36.862 --rc geninfo_all_blocks=1 00:21:36.862 --rc geninfo_unexecuted_blocks=1 00:21:36.862 00:21:36.862 ' 00:21:36.862 07:19:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:36.862 07:19:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:21:36.862 07:19:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.862 07:19:00 thread -- common/autotest_common.sh@10 -- # set +x 00:21:36.862 ************************************ 00:21:36.862 START TEST thread_poller_perf 00:21:36.862 ************************************ 00:21:36.862 07:19:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:21:36.862 [2024-11-20 07:19:01.043924] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:36.862 [2024-11-20 07:19:01.044338] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:21:37.119 [2024-11-20 07:19:01.238191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.378 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:21:37.378 [2024-11-20 07:19:01.374056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.811 [2024-11-20T07:19:03.014Z] ====================================== 00:21:38.811 [2024-11-20T07:19:03.014Z] busy:2109945526 (cyc) 00:21:38.811 [2024-11-20T07:19:03.014Z] total_run_count: 328000 00:21:38.811 [2024-11-20T07:19:03.014Z] tsc_hz: 2100000000 (cyc) 00:21:38.811 [2024-11-20T07:19:03.014Z] ====================================== 00:21:38.811 [2024-11-20T07:19:03.014Z] poller_cost: 6432 (cyc), 3062 (nsec) 00:21:38.811 00:21:38.811 real 0m1.657s 00:21:38.811 user 0m1.437s 00:21:38.811 sys 0m0.108s 00:21:38.812 07:19:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.812 07:19:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:21:38.812 ************************************ 00:21:38.812 END TEST thread_poller_perf 00:21:38.812 ************************************ 00:21:38.812 07:19:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:38.812 07:19:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:21:38.812 07:19:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.812 07:19:02 thread -- common/autotest_common.sh@10 -- # set +x 00:21:38.812 ************************************ 00:21:38.812 START TEST thread_poller_perf 00:21:38.812 ************************************ 00:21:38.812 07:19:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:21:38.812 [2024-11-20 07:19:02.751331] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:38.812 [2024-11-20 07:19:02.751488] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61267 ] 00:21:38.812 [2024-11-20 07:19:02.938106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.070 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:21:39.070 [2024-11-20 07:19:03.075729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.444 [2024-11-20T07:19:04.647Z] ====================================== 00:21:40.444 [2024-11-20T07:19:04.647Z] busy:2103689918 (cyc) 00:21:40.444 [2024-11-20T07:19:04.647Z] total_run_count: 4148000 00:21:40.444 [2024-11-20T07:19:04.647Z] tsc_hz: 2100000000 (cyc) 00:21:40.444 [2024-11-20T07:19:04.647Z] ====================================== 00:21:40.444 [2024-11-20T07:19:04.647Z] poller_cost: 507 (cyc), 241 (nsec) 00:21:40.444 ************************************ 00:21:40.444 END TEST thread_poller_perf 00:21:40.444 ************************************ 00:21:40.444 00:21:40.444 real 0m1.643s 00:21:40.444 user 0m1.418s 00:21:40.444 sys 0m0.115s 00:21:40.444 07:19:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.444 07:19:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:21:40.444 07:19:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:21:40.444 ************************************ 00:21:40.444 END TEST thread 00:21:40.444 ************************************ 00:21:40.444 00:21:40.444 real 0m3.609s 00:21:40.444 user 0m2.995s 00:21:40.444 sys 0m0.399s 00:21:40.444 07:19:04 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.444 07:19:04 thread -- common/autotest_common.sh@10 -- # set +x 00:21:40.444 07:19:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:21:40.444 07:19:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:40.444 07:19:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:40.444 07:19:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.444 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:21:40.444 ************************************ 00:21:40.444 START TEST app_cmdline 00:21:40.444 ************************************ 00:21:40.444 07:19:04 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:21:40.444 * Looking for test storage... 
00:21:40.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.445 07:19:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.445 --rc genhtml_branch_coverage=1 00:21:40.445 --rc genhtml_function_coverage=1 00:21:40.445 --rc genhtml_legend=1 00:21:40.445 --rc geninfo_all_blocks=1 00:21:40.445 --rc geninfo_unexecuted_blocks=1 00:21:40.445 00:21:40.445 ' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.445 --rc genhtml_branch_coverage=1 00:21:40.445 --rc genhtml_function_coverage=1 00:21:40.445 --rc genhtml_legend=1 00:21:40.445 --rc geninfo_all_blocks=1 00:21:40.445 --rc geninfo_unexecuted_blocks=1 00:21:40.445 
00:21:40.445 ' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.445 --rc genhtml_branch_coverage=1 00:21:40.445 --rc genhtml_function_coverage=1 00:21:40.445 --rc genhtml_legend=1 00:21:40.445 --rc geninfo_all_blocks=1 00:21:40.445 --rc geninfo_unexecuted_blocks=1 00:21:40.445 00:21:40.445 ' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.445 --rc genhtml_branch_coverage=1 00:21:40.445 --rc genhtml_function_coverage=1 00:21:40.445 --rc genhtml_legend=1 00:21:40.445 --rc geninfo_all_blocks=1 00:21:40.445 --rc geninfo_unexecuted_blocks=1 00:21:40.445 00:21:40.445 ' 00:21:40.445 07:19:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:21:40.445 07:19:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61351 00:21:40.445 07:19:04 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:21:40.445 07:19:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61351 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61351 ']' 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.445 07:19:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:40.703 [2024-11-20 07:19:04.806901] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:40.703 [2024-11-20 07:19:04.807462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:21:40.962 [2024-11-20 07:19:05.014270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.220 [2024-11-20 07:19:05.202948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.153 07:19:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.153 07:19:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:21:42.153 07:19:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:21:42.744 { 00:21:42.744 "version": "SPDK v25.01-pre git sha1 400f484f7", 00:21:42.744 "fields": { 00:21:42.744 "major": 25, 00:21:42.744 "minor": 1, 00:21:42.744 "patch": 0, 00:21:42.744 "suffix": "-pre", 00:21:42.744 "commit": "400f484f7" 00:21:42.744 } 00:21:42.744 } 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:21:42.744 07:19:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:42.744 07:19:06 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:21:43.004 request: 00:21:43.004 { 00:21:43.004 "method": "env_dpdk_get_mem_stats", 00:21:43.004 "req_id": 1 00:21:43.004 } 00:21:43.004 Got JSON-RPC error response 00:21:43.004 response: 00:21:43.004 { 00:21:43.004 "code": -32601, 00:21:43.004 "message": "Method not found" 00:21:43.004 } 00:21:43.004 07:19:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:21:43.004 07:19:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.005 07:19:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61351 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61351 ']' 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61351 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61351 00:21:43.005 killing process with pid 61351 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61351' 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 61351 00:21:43.005 07:19:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 61351 00:21:46.289 00:21:46.289 real 0m5.542s 00:21:46.289 user 0m6.069s 00:21:46.289 sys 0m0.745s 00:21:46.289 07:19:09 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.289 07:19:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:21:46.289 ************************************ 00:21:46.289 END TEST app_cmdline 00:21:46.289 ************************************ 00:21:46.289 07:19:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:46.289 07:19:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.289 07:19:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.289 07:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.289 ************************************ 00:21:46.289 START TEST version 00:21:46.289 ************************************ 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:21:46.289 * Looking for test storage... 
00:21:46.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.289 07:19:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.289 07:19:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.289 07:19:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.289 07:19:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.289 07:19:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.289 07:19:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.289 07:19:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.289 07:19:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.289 07:19:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.289 07:19:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.289 07:19:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.289 07:19:10 version -- scripts/common.sh@344 -- # case "$op" in 00:21:46.289 07:19:10 version -- scripts/common.sh@345 -- # : 1 00:21:46.289 07:19:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.289 07:19:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:46.289 07:19:10 version -- scripts/common.sh@365 -- # decimal 1 00:21:46.289 07:19:10 version -- scripts/common.sh@353 -- # local d=1 00:21:46.289 07:19:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.289 07:19:10 version -- scripts/common.sh@355 -- # echo 1 00:21:46.289 07:19:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.289 07:19:10 version -- scripts/common.sh@366 -- # decimal 2 00:21:46.289 07:19:10 version -- scripts/common.sh@353 -- # local d=2 00:21:46.289 07:19:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.289 07:19:10 version -- scripts/common.sh@355 -- # echo 2 00:21:46.289 07:19:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.289 07:19:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.289 07:19:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.289 07:19:10 version -- scripts/common.sh@368 -- # return 0 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.289 --rc genhtml_branch_coverage=1 00:21:46.289 --rc genhtml_function_coverage=1 00:21:46.289 --rc genhtml_legend=1 00:21:46.289 --rc geninfo_all_blocks=1 00:21:46.289 --rc geninfo_unexecuted_blocks=1 00:21:46.289 00:21:46.289 ' 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.289 --rc genhtml_branch_coverage=1 00:21:46.289 --rc genhtml_function_coverage=1 00:21:46.289 --rc genhtml_legend=1 00:21:46.289 --rc geninfo_all_blocks=1 00:21:46.289 --rc geninfo_unexecuted_blocks=1 00:21:46.289 00:21:46.289 ' 00:21:46.289 07:19:10 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.289 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:46.289 --rc genhtml_branch_coverage=1 00:21:46.289 --rc genhtml_function_coverage=1 00:21:46.289 --rc genhtml_legend=1 00:21:46.289 --rc geninfo_all_blocks=1 00:21:46.289 --rc geninfo_unexecuted_blocks=1 00:21:46.290 00:21:46.290 ' 00:21:46.290 07:19:10 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.290 --rc genhtml_branch_coverage=1 00:21:46.290 --rc genhtml_function_coverage=1 00:21:46.290 --rc genhtml_legend=1 00:21:46.290 --rc geninfo_all_blocks=1 00:21:46.290 --rc geninfo_unexecuted_blocks=1 00:21:46.290 00:21:46.290 ' 00:21:46.290 07:19:10 version -- app/version.sh@17 -- # get_header_version major 00:21:46.290 07:19:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # cut -f2 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # tr -d '"' 00:21:46.290 07:19:10 version -- app/version.sh@17 -- # major=25 00:21:46.290 07:19:10 version -- app/version.sh@18 -- # get_header_version minor 00:21:46.290 07:19:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # cut -f2 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # tr -d '"' 00:21:46.290 07:19:10 version -- app/version.sh@18 -- # minor=1 00:21:46.290 07:19:10 version -- app/version.sh@19 -- # get_header_version patch 00:21:46.290 07:19:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # cut -f2 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # tr -d '"' 00:21:46.290 07:19:10 version -- app/version.sh@19 -- # patch=0 00:21:46.290 07:19:10 version -- app/version.sh@20 -- # get_header_version suffix 00:21:46.290 07:19:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # cut -f2 00:21:46.290 07:19:10 version -- app/version.sh@14 -- # tr -d '"' 00:21:46.290 07:19:10 version -- app/version.sh@20 -- # suffix=-pre 00:21:46.290 07:19:10 version -- app/version.sh@22 -- # version=25.1 00:21:46.290 07:19:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:21:46.290 07:19:10 version -- app/version.sh@28 -- # version=25.1rc0 00:21:46.290 07:19:10 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:21:46.290 07:19:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:21:46.290 07:19:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:21:46.290 07:19:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:21:46.290 00:21:46.290 real 0m0.263s 00:21:46.290 user 0m0.181s 00:21:46.290 sys 0m0.119s 00:21:46.290 07:19:10 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.290 07:19:10 version -- common/autotest_common.sh@10 -- # set +x 00:21:46.290 ************************************ 00:21:46.290 END TEST version 00:21:46.290 ************************************ 00:21:46.290 07:19:10 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:21:46.290 07:19:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:21:46.290 07:19:10 -- spdk/autotest.sh@194 -- # uname -s 00:21:46.290 07:19:10 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:46.290 07:19:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:46.290 07:19:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:46.290 07:19:10 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:21:46.290 07:19:10 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:21:46.290 07:19:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.290 07:19:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.290 07:19:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.290 ************************************ 00:21:46.290 START TEST blockdev_nvme 00:21:46.290 ************************************ 00:21:46.290 07:19:10 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:21:46.290 * Looking for test storage... 00:21:46.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:46.290 07:19:10 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.290 07:19:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.290 07:19:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.548 07:19:10 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.548 07:19:10 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.549 07:19:10 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.549 --rc genhtml_branch_coverage=1 00:21:46.549 --rc genhtml_function_coverage=1 00:21:46.549 --rc genhtml_legend=1 00:21:46.549 --rc geninfo_all_blocks=1 00:21:46.549 --rc geninfo_unexecuted_blocks=1 00:21:46.549 00:21:46.549 ' 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.549 --rc genhtml_branch_coverage=1 00:21:46.549 --rc genhtml_function_coverage=1 00:21:46.549 --rc genhtml_legend=1 00:21:46.549 --rc geninfo_all_blocks=1 00:21:46.549 --rc geninfo_unexecuted_blocks=1 00:21:46.549 00:21:46.549 ' 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.549 --rc genhtml_branch_coverage=1 00:21:46.549 --rc genhtml_function_coverage=1 00:21:46.549 --rc genhtml_legend=1 00:21:46.549 --rc geninfo_all_blocks=1 00:21:46.549 --rc geninfo_unexecuted_blocks=1 00:21:46.549 00:21:46.549 ' 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.549 --rc genhtml_branch_coverage=1 00:21:46.549 --rc genhtml_function_coverage=1 00:21:46.549 --rc genhtml_legend=1 00:21:46.549 --rc geninfo_all_blocks=1 00:21:46.549 --rc geninfo_unexecuted_blocks=1 00:21:46.549 00:21:46.549 ' 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:46.549 07:19:10 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61556 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:46.549 07:19:10 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61556 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61556 ']' 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.549 07:19:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:46.549 [2024-11-20 07:19:10.737209] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:21:46.549 [2024-11-20 07:19:10.737452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61556 ] 00:21:46.807 [2024-11-20 07:19:10.940300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.065 [2024-11-20 07:19:11.128323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.001 07:19:12 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.001 07:19:12 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:21:48.001 07:19:12 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:48.001 07:19:12 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:21:48.001 07:19:12 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:21:48.001 07:19:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:21:48.001 07:19:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:48.259 07:19:12 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:21:48.259 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.259 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.550 07:19:12 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:48.550 07:19:12 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.550 07:19:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:48.840 07:19:12 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.840 07:19:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:48.840 07:19:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:48.840 07:19:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1c44c92c-4474-4507-9436-b943390efcbc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1c44c92c-4474-4507-9436-b943390efcbc",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "9f62f6f0-9fe5-4b99-9ef2-7b7da8b76c4f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9f62f6f0-9fe5-4b99-9ef2-7b7da8b76c4f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "72f765a0-5789-4d3f-ab9e-6b4588db4797"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72f765a0-5789-4d3f-ab9e-6b4588db4797",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b77d64ae-7782-47f5-921c-34843f1359c4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b77d64ae-7782-47f5-921c-34843f1359c4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "cd58e712-dbc6-40b0-b226-014801864679"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "cd58e712-dbc6-40b0-b226-014801864679",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0119e9ed-a32f-48e6-879d-dfbbbbc78efb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0119e9ed-a32f-48e6-879d-dfbbbbc78efb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:21:48.841 07:19:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:48.841 07:19:12 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:21:48.841 07:19:12 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:48.841 07:19:12 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61556 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61556 ']' 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61556 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:21:48.841 07:19:12 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61556 00:21:48.841 killing process with pid 61556 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61556' 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61556 00:21:48.841 07:19:12 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61556 00:21:52.125 07:19:15 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:52.125 07:19:15 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:52.125 07:19:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:52.125 07:19:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.125 07:19:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:52.125 ************************************ 00:21:52.125 START TEST bdev_hello_world 00:21:52.125 ************************************ 00:21:52.125 07:19:15 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:52.125 [2024-11-20 07:19:15.832647] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:52.125 [2024-11-20 07:19:15.832835] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:21:52.125 [2024-11-20 07:19:16.015290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.125 [2024-11-20 07:19:16.176127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.060 [2024-11-20 07:19:16.980167] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:53.060 [2024-11-20 07:19:16.980250] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:21:53.060 [2024-11-20 07:19:16.980289] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:53.060 [2024-11-20 07:19:16.984324] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:53.060 [2024-11-20 07:19:16.984867] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:53.060 [2024-11-20 07:19:16.984912] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:53.060 [2024-11-20 07:19:16.985113] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
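Note on the hello_bdev run above: it opens the Nvme0n1 bdev, writes a buffer, reads it back, and verifies that the 'Hello World!' payload survived the round trip. A minimal sketch of reproducing the run by hand follows, assuming this job's repo layout; the config path /tmp/hello_bdev.json is hypothetical and simply wraps the same bdev_nvme_attach_controller payload that setup_nvme_conf loaded over RPC in the top-level "subsystems" array that SPDK apps expect for --json.

cat > /tmp/hello_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# run the example against the attached controller's first namespace
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/hello_bdev.json -b Nvme0n1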
00:21:53.060 00:21:53.060 [2024-11-20 07:19:16.985159] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:54.436 00:21:54.436 real 0m2.720s 00:21:54.436 user 0m2.265s 00:21:54.436 sys 0m0.338s 00:21:54.436 07:19:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.436 07:19:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 ************************************ 00:21:54.436 END TEST bdev_hello_world 00:21:54.436 ************************************ 00:21:54.436 07:19:18 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:54.436 07:19:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:54.436 07:19:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.436 07:19:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 ************************************ 00:21:54.436 START TEST bdev_bounds 00:21:54.436 ************************************ 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61704 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:54.436 Process bdevio pid: 61704 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61704' 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61704 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61704 ']' 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.436 07:19:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 [2024-11-20 07:19:18.626991] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
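Note on the bdev_bounds run starting here: bdevio is launched with -w so it initializes against the same bdev.json and then waits, the harness polls the RPC socket via waitforlisten, and tests.py perform_tests kicks off the per-bdev boundary-I/O suites whose pass/fail lines follow. A rough by-hand equivalent, assuming this job's paths and the default /var/tmp/spdk.sock socket; the sleep is a crude stand-in for waitforlisten:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
sleep 5   # crude stand-in for waitforlisten on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests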
00:21:54.436 [2024-11-20 07:19:18.627157] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61704 ] 00:21:54.694 [2024-11-20 07:19:18.820749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:54.953 [2024-11-20 07:19:19.043409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.953 [2024-11-20 07:19:19.043501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.953 [2024-11-20 07:19:19.043521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.888 07:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.888 07:19:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:55.888 07:19:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:56.147 I/O targets: 00:21:56.147 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:56.147 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:56.147 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:56.147 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:56.147 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:56.147 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:56.147 00:21:56.147 00:21:56.147 CUnit - A unit testing framework for C - Version 2.1-3 00:21:56.147 http://cunit.sourceforge.net/ 00:21:56.147 00:21:56.147 00:21:56.147 Suite: bdevio tests on: Nvme3n1 00:21:56.147 Test: blockdev write read block ...passed 00:21:56.147 Test: blockdev write zeroes read block ...passed 00:21:56.147 Test: blockdev write zeroes read no split ...passed 00:21:56.147 Test: blockdev write zeroes read split ...passed 00:21:56.147 Test: blockdev write zeroes read split partial ...passed 00:21:56.147 Test: blockdev reset ...[2024-11-20 07:19:20.181420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:21:56.147 [2024-11-20 07:19:20.186569] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:21:56.147 passed 00:21:56.147 Test: blockdev write read 8 blocks ...passed 00:21:56.147 Test: blockdev write read size > 128k ...passed 00:21:56.147 Test: blockdev write read invalid size ...passed 00:21:56.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.147 Test: blockdev write read max offset ...passed 00:21:56.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.147 Test: blockdev writev readv 8 blocks ...passed 00:21:56.147 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.147 Test: blockdev writev readv block ...passed 00:21:56.147 Test: blockdev writev readv size > 128k ...passed 00:21:56.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.147 Test: blockdev comparev and writev ...[2024-11-20 07:19:20.195612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b980a000 len:0x1000 00:21:56.147 [2024-11-20 07:19:20.195734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:56.147 passed 00:21:56.147 Test: blockdev nvme passthru rw ...passed 00:21:56.147 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.196599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:56.147 [2024-11-20 07:19:20.196652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:56.147 passed 00:21:56.147 Test: blockdev nvme admin passthru ...passed 00:21:56.147 Test: blockdev copy ...passed 00:21:56.147 Suite: bdevio tests on: Nvme2n3 00:21:56.147 Test: blockdev write read block ...passed 00:21:56.147 Test: blockdev write zeroes read block ...passed 00:21:56.147 Test: blockdev write zeroes read no split ...passed 00:21:56.147 Test: blockdev write zeroes read split ...passed 00:21:56.147 Test: blockdev write zeroes read split partial ...passed 00:21:56.147 Test: blockdev reset ...[2024-11-20 07:19:20.284388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:56.147 [2024-11-20 07:19:20.289671] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:56.147 passed 00:21:56.147 Test: blockdev write read 8 blocks ...passed 00:21:56.147 Test: blockdev write read size > 128k ...passed 00:21:56.147 Test: blockdev write read invalid size ...passed 00:21:56.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.147 Test: blockdev write read max offset ...passed 00:21:56.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.147 Test: blockdev writev readv 8 blocks ...passed 00:21:56.147 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.147 Test: blockdev writev readv block ...passed 00:21:56.147 Test: blockdev writev readv size > 128k ...passed 00:21:56.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.147 Test: blockdev comparev and writev ...[2024-11-20 07:19:20.298010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29d206000 len:0x1000 00:21:56.147 [2024-11-20 07:19:20.298127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:56.147 passed 00:21:56.147 Test: blockdev nvme passthru rw ...passed 00:21:56.147 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.298956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:56.147 passed 00:21:56.147 Test: blockdev nvme admin passthru ...[2024-11-20 07:19:20.299003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:56.147 passed 00:21:56.147 Test: blockdev copy ...passed 00:21:56.147 Suite: bdevio tests on: Nvme2n2 00:21:56.147 Test: blockdev write read block ...passed 00:21:56.147 Test: blockdev write zeroes read block ...passed 00:21:56.147 Test: blockdev write zeroes read no split ...passed 00:21:56.406 Test: blockdev write zeroes read split ...passed 00:21:56.406 Test: blockdev write zeroes read split partial ...passed 00:21:56.406 Test: blockdev reset ...[2024-11-20 07:19:20.393030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:56.406 [2024-11-20 07:19:20.398151] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:56.406 passed 00:21:56.406 Test: blockdev write read 8 blocks ...passed 00:21:56.406 Test: blockdev write read size > 128k ...passed 00:21:56.406 Test: blockdev write read invalid size ...passed 00:21:56.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.406 Test: blockdev write read max offset ...passed 00:21:56.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.406 Test: blockdev writev readv 8 blocks ...passed 00:21:56.406 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.406 Test: blockdev writev readv block ...passed 00:21:56.406 Test: blockdev writev readv size > 128k ...passed 00:21:56.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.406 Test: blockdev comparev and writev ...[2024-11-20 07:19:20.407578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d503c000 len:0x1000 00:21:56.406 [2024-11-20 07:19:20.407688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:56.406 passed 00:21:56.406 Test: blockdev nvme passthru rw ...passed 00:21:56.406 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.408603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:56.406 passed[2024-11-20 07:19:20.408653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:56.406 00:21:56.406 Test: blockdev nvme admin passthru ...passed 00:21:56.406 Test: blockdev copy ...passed 00:21:56.406 Suite: bdevio tests on: Nvme2n1 00:21:56.406 Test: blockdev write read block ...passed 00:21:56.406 Test: blockdev write zeroes read block ...passed 00:21:56.406 Test: blockdev write zeroes read no split ...passed 00:21:56.406 Test: blockdev write zeroes read split ...passed 00:21:56.406 Test: blockdev write zeroes read split partial ...passed 00:21:56.406 Test: blockdev reset ...[2024-11-20 07:19:20.499850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:21:56.406 [2024-11-20 07:19:20.505982] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:21:56.406 passed 00:21:56.406 Test: blockdev write read 8 blocks ...passed 00:21:56.406 Test: blockdev write read size > 128k ...passed 00:21:56.406 Test: blockdev write read invalid size ...passed 00:21:56.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.406 Test: blockdev write read max offset ...passed 00:21:56.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.406 Test: blockdev writev readv 8 blocks ...passed 00:21:56.406 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.406 Test: blockdev writev readv block ...passed 00:21:56.406 Test: blockdev writev readv size > 128k ...passed 00:21:56.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.406 Test: blockdev comparev and writev ...[2024-11-20 07:19:20.514793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5038000 len:0x1000 00:21:56.406 [2024-11-20 07:19:20.514947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:56.406 passed 00:21:56.406 Test: blockdev nvme passthru rw ...passed 00:21:56.406 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.515913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:56.406 [2024-11-20 07:19:20.515981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:56.406 passed 00:21:56.406 Test: blockdev nvme admin passthru ...passed 00:21:56.406 Test: blockdev copy ...passed 00:21:56.406 Suite: bdevio tests on: Nvme1n1 00:21:56.406 Test: blockdev write read block ...passed 00:21:56.406 Test: blockdev write zeroes read block ...passed 00:21:56.406 Test: blockdev write zeroes read no split ...passed 00:21:56.406 Test: blockdev write zeroes read split ...passed 00:21:56.406 Test: blockdev write zeroes read split partial ...passed 00:21:56.406 Test: blockdev reset ...[2024-11-20 07:19:20.606234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:21:56.664 [2024-11-20 07:19:20.611867] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:21:56.664 passed 00:21:56.664 Test: blockdev write read 8 blocks ...passed 00:21:56.664 Test: blockdev write read size > 128k ...passed 00:21:56.664 Test: blockdev write read invalid size ...passed 00:21:56.664 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.664 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.664 Test: blockdev write read max offset ...passed 00:21:56.664 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.664 Test: blockdev writev readv 8 blocks ...passed 00:21:56.664 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.665 Test: blockdev writev readv block ...passed 00:21:56.665 Test: blockdev writev readv size > 128k ...passed 00:21:56.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.665 Test: blockdev comparev and writev ...[2024-11-20 07:19:20.622147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5034000 len:0x1000 00:21:56.665 [2024-11-20 07:19:20.622282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:56.665 passed 00:21:56.665 Test: blockdev nvme passthru rw ...passed 00:21:56.665 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.623290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:56.665 [2024-11-20 07:19:20.623601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:56.665 passed 00:21:56.665 Test: blockdev nvme admin passthru ...passed 00:21:56.665 Test: blockdev copy ...passed 00:21:56.665 Suite: bdevio tests on: Nvme0n1 00:21:56.665 Test: blockdev write read block ...passed 00:21:56.665 Test: blockdev write zeroes read block ...passed 00:21:56.665 Test: blockdev write zeroes read no split ...passed 00:21:56.665 Test: blockdev write zeroes read split ...passed 00:21:56.665 Test: blockdev write zeroes read split partial ...passed 00:21:56.665 Test: blockdev reset ...[2024-11-20 07:19:20.713404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:21:56.665 [2024-11-20 07:19:20.719503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:21:56.665 passed 00:21:56.665 Test: blockdev write read 8 blocks ...
00:21:56.665 passed 00:21:56.665 Test: blockdev write read size > 128k ...passed 00:21:56.665 Test: blockdev write read invalid size ...passed 00:21:56.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.665 Test: blockdev write read max offset ...passed 00:21:56.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.665 Test: blockdev writev readv 8 blocks ...passed 00:21:56.665 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.665 Test: blockdev writev readv block ...passed 00:21:56.665 Test: blockdev writev readv size > 128k ...passed 00:21:56.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.665 Test: blockdev comparev and writev ...passed 00:21:56.665 Test: blockdev nvme passthru rw ...[2024-11-20 07:19:20.729222] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:21:56.665 separate metadata which is not supported yet. 00:21:56.665 passed 00:21:56.665 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:19:20.729996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:21:56.665 Test: blockdev nvme admin passthru ...RP2 0x0 00:21:56.665 [2024-11-20 07:19:20.730312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:21:56.665 passed 00:21:56.665 Test: blockdev copy ...passed 00:21:56.665 00:21:56.665 Run Summary: Type Total Ran Passed Failed Inactive 00:21:56.665 suites 6 6 n/a 0 0 00:21:56.665 tests 138 138 138 0 0 00:21:56.665 asserts 893 893 893 0 n/a 00:21:56.665 00:21:56.665 Elapsed time = 1.766 seconds 00:21:56.665 0 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61704 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61704 ']' 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61704 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61704 00:21:56.665 killing process with pid 61704 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61704' 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61704 00:21:56.665 07:19:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61704 00:21:58.097 ************************************ 00:21:58.097 END TEST bdev_bounds 00:21:58.097 ************************************ 00:21:58.097 07:19:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:58.097 00:21:58.097 real 0m3.528s 00:21:58.097 user 0m9.329s 00:21:58.097 sys 0m0.513s 00:21:58.098 07:19:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.098 07:19:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:21:58.098 07:19:22 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:58.098 07:19:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:58.098 07:19:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:58.098 07:19:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 ************************************ 00:21:58.098 START TEST bdev_nbd 00:21:58.098 ************************************ 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61780 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61780 /var/tmp/spdk-nbd.sock 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61780 ']' 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:58.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.098 07:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:58.098 [2024-11-20 07:19:22.214221] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:21:58.098 [2024-11-20 07:19:22.214624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.357 [2024-11-20 07:19:22.411708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.615 [2024-11-20 07:19:22.619238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:59.550 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:59.810 1+0 records in 00:21:59.810 1+0 records out 00:21:59.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555114 s, 7.4 MB/s 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:59.810 07:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.069 1+0 records in 00:22:00.069 1+0 records out 00:22:00.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579088 s, 7.1 MB/s 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:00.069 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.329 1+0 records in 00:22:00.329 1+0 records out 00:22:00.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750571 s, 5.5 MB/s 00:22:00.329 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:00.588 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.846 
07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:00.846 1+0 records in 00:22:00.846 1+0 records out 00:22:00.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083954 s, 4.9 MB/s 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:00.846 07:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:01.104 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.104 1+0 records in 00:22:01.104 1+0 records out 00:22:01.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153473 s, 2.7 MB/s 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:01.105 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.364 1+0 records in 00:22:01.364 1+0 records out 00:22:01.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103359 s, 4.0 MB/s 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:22:01.364 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd0", 00:22:01.622 "bdev_name": "Nvme0n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd1", 00:22:01.622 "bdev_name": "Nvme1n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd2", 00:22:01.622 "bdev_name": "Nvme2n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd3", 00:22:01.622 "bdev_name": "Nvme2n2" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd4", 00:22:01.622 "bdev_name": "Nvme2n3" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd5", 00:22:01.622 "bdev_name": "Nvme3n1" 00:22:01.622 } 00:22:01.622 ]' 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd0", 00:22:01.622 "bdev_name": "Nvme0n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd1", 00:22:01.622 
"bdev_name": "Nvme1n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd2", 00:22:01.622 "bdev_name": "Nvme2n1" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd3", 00:22:01.622 "bdev_name": "Nvme2n2" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd4", 00:22:01.622 "bdev_name": "Nvme2n3" 00:22:01.622 }, 00:22:01.622 { 00:22:01.622 "nbd_device": "/dev/nbd5", 00:22:01.622 "bdev_name": "Nvme3n1" 00:22:01.622 } 00:22:01.622 ]' 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.622 07:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:01.880 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.880 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.881 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.139 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd2 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.397 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:02.655 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:22:02.914 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:02.914 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:02.914 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:02.914 07:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:22:02.914 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:03.173 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:03.480 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:03.755 07:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:22:04.015 /dev/nbd0 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.015 1+0 records in 00:22:04.015 1+0 records out 00:22:04.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614805 s, 6.7 MB/s 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:04.015 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:22:04.273 /dev/nbd1 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.532 1+0 records in 00:22:04.532 1+0 records out 00:22:04.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652595 s, 6.3 MB/s 
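The xtrace above spells out the waitfornbd helper from common/autotest_common.sh: poll /proc/partitions for the node name (up to 20 tries), then prove the device actually services I/O with a single 4 KiB O_DIRECT read through dd, checking the copied size afterwards. A minimal standalone sketch of that pattern follows; the poll delay and the probe-file path are assumptions, since neither is visible in the trace.

    # Sketch of the waitfornbd pattern traced above; the 20-try bound, the
    # grep probe, and the 1-block O_DIRECT read all come from the log.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; the xtrace does not show the delay
        done
        # One direct 4 KiB read confirms the kernel nbd node answers I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # mirrors the '[' 4096 '!=' 0 ']' check in the trace
    }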
00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:04.532 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:22:04.791 /dev/nbd10 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.791 1+0 records in 00:22:04.791 1+0 records out 00:22:04.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637546 s, 6.4 MB/s 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:04.791 07:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:22:05.051 /dev/nbd11 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
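Stepping back, nbd_common.sh@14-17 in the trace is the attach loop of nbd_rpc_data_verify: each bdev in bdev_list is exported over the spdk-nbd.sock RPC socket onto its matching /dev/nbdX node, and waitfornbd gates on the node before the next attach. A hedged sketch of that loop, reusing the helper sketched above:

    # Attach phase of nbd_rpc_data_verify as traced above (socket path and
    # both arrays come from the log; failure handling is left to set -e).
    rpc_server=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # Ask the SPDK app behind $rpc_server to expose the bdev as an nbd node.
        scripts/rpc.py -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done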
00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:05.309 1+0 records in 00:22:05.309 1+0 records out 00:22:05.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077905 s, 5.3 MB/s 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:05.309 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:22:05.568 /dev/nbd12 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:05.568 1+0 records in 00:22:05.568 1+0 records out 00:22:05.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524702 s, 7.8 MB/s 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:05.568 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:22:05.827 /dev/nbd13 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:05.827 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:05.828 1+0 records in 00:22:05.828 1+0 records out 00:22:05.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072312 s, 5.7 MB/s 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:05.828 07:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:06.086 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd0", 00:22:06.086 "bdev_name": "Nvme0n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd1", 00:22:06.086 "bdev_name": "Nvme1n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd10", 00:22:06.086 "bdev_name": "Nvme2n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd11", 00:22:06.086 "bdev_name": "Nvme2n2" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd12", 00:22:06.086 "bdev_name": "Nvme2n3" 00:22:06.086 
}, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd13", 00:22:06.086 "bdev_name": "Nvme3n1" 00:22:06.086 } 00:22:06.086 ]' 00:22:06.086 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd0", 00:22:06.086 "bdev_name": "Nvme0n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd1", 00:22:06.086 "bdev_name": "Nvme1n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd10", 00:22:06.086 "bdev_name": "Nvme2n1" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd11", 00:22:06.086 "bdev_name": "Nvme2n2" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd12", 00:22:06.086 "bdev_name": "Nvme2n3" 00:22:06.086 }, 00:22:06.086 { 00:22:06.086 "nbd_device": "/dev/nbd13", 00:22:06.086 "bdev_name": "Nvme3n1" 00:22:06.086 } 00:22:06.086 ]' 00:22:06.086 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:06.086 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:06.086 /dev/nbd1 00:22:06.086 /dev/nbd10 00:22:06.086 /dev/nbd11 00:22:06.086 /dev/nbd12 00:22:06.086 /dev/nbd13' 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:06.087 /dev/nbd1 00:22:06.087 /dev/nbd10 00:22:06.087 /dev/nbd11 00:22:06.087 /dev/nbd12 00:22:06.087 /dev/nbd13' 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:06.087 256+0 records in 00:22:06.087 256+0 records out 00:22:06.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731438 s, 143 MB/s 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:06.087 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:06.345 256+0 records in 00:22:06.345 256+0 records out 00:22:06.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15233 s, 6.9 MB/s 00:22:06.345 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:06.345 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:22:06.603 256+0 records in 00:22:06.604 256+0 records out 00:22:06.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158577 s, 6.6 MB/s 00:22:06.604 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:06.604 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:22:06.604 256+0 records in 00:22:06.604 256+0 records out 00:22:06.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159655 s, 6.6 MB/s 00:22:06.604 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:06.604 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:22:06.863 256+0 records in 00:22:06.863 256+0 records out 00:22:06.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153799 s, 6.8 MB/s 00:22:06.863 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:06.863 07:19:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:22:07.121 256+0 records in 00:22:07.121 256+0 records out 00:22:07.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156326 s, 6.7 MB/s 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:22:07.121 256+0 records in 00:22:07.121 256+0 records out 00:22:07.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156497 s, 6.7 MB/s 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.121 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.687 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:07.945 07:19:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.945 07:19:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.202 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.458 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:08.714 07:19:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:09.279 07:19:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:09.279 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:09.536 07:19:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:09.537 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:09.537 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:09.537 07:19:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:10.101 malloc_lvol_verify 00:22:10.101 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:10.359 7596c59f-fee8-41f9-aebf-e7e8bd47a5a8 00:22:10.359 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:10.617 410d3a5a-4d11-4384-aaf5-f99c59410881 00:22:10.617 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:10.875 /dev/nbd0 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:10.875 mke2fs 1.47.0 
(5-Feb-2023) 00:22:10.875 Discarding device blocks: 0/4096 done 00:22:10.875 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:10.875 00:22:10.875 Allocating group tables: 0/1 done 00:22:10.875 Writing inode tables: 0/1 done 00:22:10.875 Creating journal (1024 blocks): done 00:22:10.875 Writing superblocks and filesystem accounting information: 0/1 done 00:22:10.875 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:10.875 07:19:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61780 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61780 ']' 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61780 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61780 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.132 killing process with pid 61780 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61780' 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61780 00:22:11.132 07:19:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61780 00:22:13.164 07:19:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:13.164 00:22:13.164 real 0m14.696s 00:22:13.164 user 0m19.756s 00:22:13.164 sys 0m5.585s 00:22:13.164 07:19:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.164 ************************************ 00:22:13.164 END TEST bdev_nbd 00:22:13.164 ************************************ 00:22:13.164 07:19:36 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:22:13.164 07:19:36 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:13.164 07:19:36 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:22:13.164 skipping fio tests on NVMe due to multi-ns failures. 00:22:13.164 07:19:36 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:22:13.164 07:19:36 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:13.164 07:19:36 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:13.164 07:19:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:13.164 07:19:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.164 07:19:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:13.164 ************************************ 00:22:13.164 START TEST bdev_verify 00:22:13.164 ************************************ 00:22:13.164 07:19:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:13.164 [2024-11-20 07:19:36.930580] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:13.164 [2024-11-20 07:19:36.930726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62198 ] 00:22:13.164 [2024-11-20 07:19:37.122392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:13.164 [2024-11-20 07:19:37.258913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.164 [2024-11-20 07:19:37.258940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.099 Running I/O for 5 seconds... 
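Before the numbers land, the bdevperf invocation driving this verify pass is worth unpacking; the flags are copied from the command line above, and the reading of -C is an inference from the paired Core Mask 0x1/0x2 jobs per device in the table that follows.

    # bdev_verify's bdevperf run, flags as traced:
    #   -q 128     queue depth per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  write, read back, and compare
    #   -t 5       run for 5 seconds
    #   -C         assumed: every core submits to every bdev, which is why each
    #              device reports one job on core mask 0x1 and one on 0x2 below
    #   -m 0x3     reactor core mask: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''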
00:22:16.410 16128.00 IOPS, 63.00 MiB/s [2024-11-20T07:19:41.548Z] 15744.00 IOPS, 61.50 MiB/s [2024-11-20T07:19:42.483Z] 15914.67 IOPS, 62.17 MiB/s [2024-11-20T07:19:43.426Z] 16512.00 IOPS, 64.50 MiB/s [2024-11-20T07:19:43.426Z] 17113.60 IOPS, 66.85 MiB/s 00:22:19.223 Latency(us) 00:22:19.223 [2024-11-20T07:19:43.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.223 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0xbd0bd 00:22:19.223 Nvme0n1 : 5.05 1419.71 5.55 0.00 0.00 89891.31 18474.91 83386.76 00:22:19.223 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:22:19.223 Nvme0n1 : 5.08 1385.98 5.41 0.00 0.00 91995.65 17226.61 101362.35 00:22:19.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0xa0000 00:22:19.223 Nvme1n1 : 5.05 1418.96 5.54 0.00 0.00 89764.87 18599.74 79891.50 00:22:19.223 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0xa0000 length 0xa0000 00:22:19.223 Nvme1n1 : 5.08 1385.59 5.41 0.00 0.00 91646.21 19972.88 82887.44 00:22:19.223 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0x80000 00:22:19.223 Nvme2n1 : 5.05 1418.22 5.54 0.00 0.00 89598.20 16727.28 79891.50 00:22:19.223 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x80000 length 0x80000 00:22:19.223 Nvme2n1 : 5.08 1385.23 5.41 0.00 0.00 91392.80 19473.55 74898.29 00:22:19.223 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0x80000 00:22:19.223 Nvme2n2 : 5.06 1417.48 5.54 0.00 0.00 89409.38 16103.13 80390.83 00:22:19.223 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x80000 length 0x80000 00:22:19.223 Nvme2n2 : 5.08 1384.75 5.41 0.00 0.00 91153.58 19598.38 74398.96 00:22:19.223 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0x80000 00:22:19.223 Nvme2n3 : 5.09 1434.40 5.60 0.00 0.00 88274.96 11234.74 80890.15 00:22:19.223 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x80000 length 0x80000 00:22:19.223 Nvme2n3 : 5.10 1394.26 5.45 0.00 0.00 90409.97 5867.03 77394.90 00:22:19.223 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x0 length 0x20000 00:22:19.223 Nvme3n1 : 5.09 1434.01 5.60 0.00 0.00 88087.41 9175.04 83386.76 00:22:19.223 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:19.223 Verification LBA range: start 0x20000 length 0x20000 00:22:19.223 Nvme3n1 : 5.10 1393.87 5.44 0.00 0.00 90320.86 6241.52 82388.11 00:22:19.223 [2024-11-20T07:19:43.426Z] =================================================================================================================== 00:22:19.223 [2024-11-20T07:19:43.426Z] Total : 16872.46 65.91 0.00 0.00 90147.59 5867.03 101362.35 00:22:21.125 00:22:21.125 real 0m7.996s 00:22:21.125 user 0m14.690s 00:22:21.125 sys 0m0.342s 00:22:21.125 07:19:44 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.125 07:19:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:21.125 ************************************ 00:22:21.125 END TEST bdev_verify 00:22:21.125 ************************************ 00:22:21.125 07:19:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:21.125 07:19:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:21.125 07:19:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.125 07:19:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:21.125 ************************************ 00:22:21.125 START TEST bdev_verify_big_io 00:22:21.125 ************************************ 00:22:21.125 07:19:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:21.125 [2024-11-20 07:19:44.977099] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:21.125 [2024-11-20 07:19:44.977264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62307 ] 00:22:21.125 [2024-11-20 07:19:45.157740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:21.125 [2024-11-20 07:19:45.298829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.125 [2024-11-20 07:19:45.298868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.070 Running I/O for 5 seconds... 
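The big-I/O pass below is the same bdevperf harness with -o 65536, so the MiB/s column is simply the IOPS column scaled by the 64 KiB transfer size; a one-line sanity check against the first Nvme0n1 row of the table that follows:

    # throughput = IOPS * io_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 132.13 * 65536 / 1048576 }'    # -> 8.26 MiB/s, matching the table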
00:22:26.582 1187.00 IOPS, 74.19 MiB/s [2024-11-20T07:19:52.162Z] 2166.50 IOPS, 135.41 MiB/s [2024-11-20T07:19:52.162Z] 2502.00 IOPS, 156.38 MiB/s [2024-11-20T07:19:52.162Z] 2348.00 IOPS, 146.75 MiB/s 00:22:27.959 Latency(us) 00:22:27.959 [2024-11-20T07:19:52.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.959 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0xbd0b 00:22:27.959 Nvme0n1 : 5.57 132.13 8.26 0.00 0.00 937677.61 17725.93 886795.70 00:22:27.959 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0xbd0b length 0xbd0b 00:22:27.959 Nvme0n1 : 5.60 131.35 8.21 0.00 0.00 951430.58 30458.64 870817.40 00:22:27.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0xa000 00:22:27.959 Nvme1n1 : 5.71 134.49 8.41 0.00 0.00 896314.35 111348.78 794920.47 00:22:27.959 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0xa000 length 0xa000 00:22:27.959 Nvme1n1 : 5.72 129.88 8.12 0.00 0.00 924199.32 49432.87 850844.53 00:22:27.959 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0x8000 00:22:27.959 Nvme2n1 : 5.71 124.98 7.81 0.00 0.00 930638.12 119337.94 1589840.94 00:22:27.959 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x8000 length 0x8000 00:22:27.959 Nvme2n1 : 5.72 128.80 8.05 0.00 0.00 905042.46 49682.53 894784.85 00:22:27.959 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0x8000 00:22:27.959 Nvme2n2 : 5.81 135.86 8.49 0.00 0.00 843561.69 39696.09 1438047.09 00:22:27.959 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x8000 length 0x8000 00:22:27.959 Nvme2n2 : 5.72 134.15 8.38 0.00 0.00 861238.61 109351.50 926741.46 00:22:27.959 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0x8000 00:22:27.959 Nvme2n3 : 5.82 140.33 8.77 0.00 0.00 797086.14 23218.47 1645765.00 00:22:27.959 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x8000 length 0x8000 00:22:27.959 Nvme2n3 : 5.80 143.36 8.96 0.00 0.00 790018.18 10173.68 930736.03 00:22:27.959 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:27.959 Verification LBA range: start 0x0 length 0x2000 00:22:27.960 Nvme3n1 : 5.83 150.84 9.43 0.00 0.00 724727.27 1607.19 1677721.60 00:22:27.960 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:27.960 Verification LBA range: start 0x2000 length 0x2000 00:22:27.960 Nvme3n1 : 5.82 149.84 9.36 0.00 0.00 737237.77 9799.19 930736.03 00:22:27.960 [2024-11-20T07:19:52.163Z] =================================================================================================================== 00:22:27.960 [2024-11-20T07:19:52.163Z] Total : 1636.01 102.25 0.00 0.00 853193.27 1607.19 1677721.60 00:22:30.490 00:22:30.490 real 0m9.270s 00:22:30.490 user 0m17.201s 00:22:30.490 sys 0m0.374s 00:22:30.490 07:19:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:30.490 07:19:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:30.490 ************************************ 00:22:30.490 END TEST bdev_verify_big_io 00:22:30.490 ************************************ 00:22:30.490 07:19:54 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:30.490 07:19:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:30.490 07:19:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.490 07:19:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:30.490 ************************************ 00:22:30.490 START TEST bdev_write_zeroes 00:22:30.490 ************************************ 00:22:30.491 07:19:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:30.491 [2024-11-20 07:19:54.309518] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:30.491 [2024-11-20 07:19:54.310301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62422 ] 00:22:30.491 [2024-11-20 07:19:54.491100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.491 [2024-11-20 07:19:54.632585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.424 Running I/O for 1 seconds... 
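Every sub-test here runs under autotest_common.sh's run_test wrapper, which prints the starred START TEST / END TEST banners bracketing each block of output. A minimal sketch assuming only what the banners show — the real helper also records per-test timing and manages xtrace:

    # Minimal run_test sketch; only the banner behavior is taken from the log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }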
00:22:32.357 48704.00 IOPS, 190.25 MiB/s 00:22:32.357 Latency(us) 00:22:32.358 [2024-11-20T07:19:56.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.358 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme0n1 : 1.03 8068.60 31.52 0.00 0.00 15823.92 11921.31 29210.33 00:22:32.358 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme1n1 : 1.03 8055.80 31.47 0.00 0.00 15824.83 12358.22 28336.52 00:22:32.358 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme2n1 : 1.03 8043.64 31.42 0.00 0.00 15784.67 12046.14 27712.37 00:22:32.358 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme2n2 : 1.04 8031.50 31.37 0.00 0.00 15719.83 9549.53 26963.38 00:22:32.358 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme2n3 : 1.04 8019.33 31.33 0.00 0.00 15706.84 9362.29 26963.38 00:22:32.358 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:32.358 Nvme3n1 : 1.04 7945.59 31.04 0.00 0.00 15803.09 11734.06 29335.16 00:22:32.358 [2024-11-20T07:19:56.561Z] =================================================================================================================== 00:22:32.358 [2024-11-20T07:19:56.561Z] Total : 48164.47 188.14 0.00 0.00 15777.17 9362.29 29335.16 00:22:33.735 00:22:33.736 real 0m3.587s 00:22:33.736 user 0m3.178s 00:22:33.736 sys 0m0.287s 00:22:33.736 07:19:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.736 07:19:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:33.736 ************************************ 00:22:33.736 END TEST bdev_write_zeroes 00:22:33.736 ************************************ 00:22:33.736 07:19:57 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:33.736 07:19:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:33.736 07:19:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.736 07:19:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:33.736 ************************************ 00:22:33.736 START TEST bdev_json_nonenclosed 00:22:33.736 ************************************ 00:22:33.736 07:19:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:33.995 [2024-11-20 07:19:57.985926] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:22:33.995 [2024-11-20 07:19:57.986120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62486 ] 00:22:33.995 [2024-11-20 07:19:58.182256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.254 [2024-11-20 07:19:58.308187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.254 [2024-11-20 07:19:58.308291] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:34.254 [2024-11-20 07:19:58.308316] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:34.254 [2024-11-20 07:19:58.308330] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:34.513 00:22:34.513 real 0m0.745s 00:22:34.513 user 0m0.472s 00:22:34.513 sys 0m0.166s 00:22:34.513 07:19:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.513 07:19:58 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:34.513 ************************************ 00:22:34.513 END TEST bdev_json_nonenclosed 00:22:34.513 ************************************ 00:22:34.513 07:19:58 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:34.513 07:19:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:34.513 07:19:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.513 07:19:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:34.513 ************************************ 00:22:34.513 START TEST bdev_json_nonarray 00:22:34.513 ************************************ 00:22:34.513 07:19:58 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:34.772 [2024-11-20 07:19:58.780133] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:34.772 [2024-11-20 07:19:58.780313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62517 ] 00:22:35.030 [2024-11-20 07:19:58.978714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.030 [2024-11-20 07:19:59.116043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.030 [2024-11-20 07:19:59.116159] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
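Both JSON negative tests exercise json_config_prepare_ctx's validation the same way: hand bdevperf a deliberately malformed --json config and let the app refuse it. The fixture paths (test/bdev/nonenclosed.json, test/bdev/nonarray.json) and the two error strings come from the log; the file contents below are illustrative assumptions.

    # Shapes that would reproduce the two errors above (contents assumed):
    echo '"subsystems": []'    > nonenclosed.json   # -> "not enclosed in {}"
    echo '{"subsystems": {}}'  > nonarray.json      # -> "'subsystems' should be an array"
    echo '{"subsystems": []}'  > wellformed.json    # minimal top level the loader accepts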
00:22:35.030 [2024-11-20 07:19:59.116185] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:35.030 [2024-11-20 07:19:59.116201] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:35.287 00:22:35.287 real 0m0.736s 00:22:35.287 user 0m0.473s 00:22:35.287 sys 0m0.157s 00:22:35.287 07:19:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.287 ************************************ 00:22:35.287 07:19:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:35.287 END TEST bdev_json_nonarray 00:22:35.287 ************************************ 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:22:35.287 07:19:59 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:22:35.287 00:22:35.287 real 0m49.094s 00:22:35.287 user 1m12.928s 00:22:35.287 sys 0m8.868s 00:22:35.287 07:19:59 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.287 07:19:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:22:35.287 ************************************ 00:22:35.287 END TEST blockdev_nvme 00:22:35.287 ************************************ 00:22:35.614 07:19:59 -- spdk/autotest.sh@209 -- # uname -s 00:22:35.614 07:19:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:22:35.614 07:19:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:22:35.614 07:19:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.614 07:19:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.614 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:22:35.614 ************************************ 00:22:35.614 START TEST blockdev_nvme_gpt 00:22:35.614 ************************************ 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:22:35.614 * Looking for test storage... 
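Every suite in this log goes through the same run_test wrapper, which prints the starred START TEST/END TEST banners and the real/user/sys timing shown above. Reduced to that visible behavior, the pattern is (an illustrative sketch, not SPDK's actual autotest_common.sh helper):

# Illustrative reduction of the run_test pattern seen in this log:
# banner, timed execution of the test command, closing banner.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
run_test_sketch demo_true true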
00:22:35.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.614 07:19:59 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.614 --rc genhtml_branch_coverage=1 00:22:35.614 --rc genhtml_function_coverage=1 00:22:35.614 --rc genhtml_legend=1 00:22:35.614 --rc geninfo_all_blocks=1 00:22:35.614 --rc geninfo_unexecuted_blocks=1 00:22:35.614 00:22:35.614 ' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.614 --rc 
genhtml_branch_coverage=1 00:22:35.614 --rc genhtml_function_coverage=1 00:22:35.614 --rc genhtml_legend=1 00:22:35.614 --rc geninfo_all_blocks=1 00:22:35.614 --rc geninfo_unexecuted_blocks=1 00:22:35.614 00:22:35.614 ' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.614 --rc genhtml_branch_coverage=1 00:22:35.614 --rc genhtml_function_coverage=1 00:22:35.614 --rc genhtml_legend=1 00:22:35.614 --rc geninfo_all_blocks=1 00:22:35.614 --rc geninfo_unexecuted_blocks=1 00:22:35.614 00:22:35.614 ' 00:22:35.614 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.614 --rc genhtml_branch_coverage=1 00:22:35.614 --rc genhtml_function_coverage=1 00:22:35.614 --rc genhtml_legend=1 00:22:35.614 --rc geninfo_all_blocks=1 00:22:35.614 --rc geninfo_unexecuted_blocks=1 00:22:35.615 00:22:35.615 ' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62601 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62601 
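start_spdk_tgt launches the target binary and then blocks in waitforlisten until the RPC socket accepts connections; that is what the max_retries=100 and /var/tmp/spdk.sock values in the trace below are about. The gist, with the socket path and retry bound taken from the trace and the polling probe itself an assumption:

# Start spdk_tgt, then poll the default RPC socket until it answers.
# spdk_get_version is an assumed stand-in for the real waitforlisten
# probe; run from the SPDK repo root so scripts/rpc.py resolves.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    if scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; then
        break
    fi
    sleep 0.5
done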
00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62601 ']' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.615 07:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:35.874 [2024-11-20 07:19:59.914956] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:35.874 [2024-11-20 07:19:59.915161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62601 ] 00:22:36.132 [2024-11-20 07:20:00.125207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.132 [2024-11-20 07:20:00.322782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.507 07:20:01 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.507 07:20:01 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:22:37.507 07:20:01 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:37.507 07:20:01 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:22:37.507 07:20:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:37.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:38.023 Waiting for block devices as requested 00:22:38.282 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:38.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:38.282 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:38.541 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:43.810 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:22:43.811 07:20:07 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:22:43.811 BYT; 00:22:43.811 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:22:43.811 BYT; 00:22:43.811 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:22:43.811 07:20:07 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:22:43.811 07:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:22:44.747 The operation has completed successfully. 00:22:44.747 07:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:22:45.680 The operation has completed successfully. 00:22:45.680 07:20:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:46.247 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:46.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:46.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:47.071 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:47.071 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:47.071 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:22:47.071 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.071 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.071 [] 00:22:47.071 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.071 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:22:47.072 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:22:47.072 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:22:47.072 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:47.330 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:22:47.330 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.330 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:47.589 07:20:11 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:47.589 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.589 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:47.849 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.849 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:47.849 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:47.851 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "352fc769-8db2-48c9-8880-02f7d5b0449b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "352fc769-8db2-48c9-8880-02f7d5b0449b",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "cfb67945-6903-483c-94fc-fa31c03b1b51"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cfb67945-6903-483c-94fc-fa31c03b1b51",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5027ca46-f5f1-44ce-b765-841cb27b5992"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5027ca46-f5f1-44ce-b765-841cb27b5992",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2a3692bb-3fdb-4dc9-b79e-044fcad74625"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a3692bb-3fdb-4dc9-b79e-044fcad74625",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fe8392ac-9f68-4367-b9cf-11d1cd11f6fa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fe8392ac-9f68-4367-b9cf-11d1cd11f6fa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:22:47.851 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:47.851 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:22:47.851 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:47.851 07:20:11 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62601 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62601 ']' 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62601 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62601 00:22:47.851 killing process with pid 62601 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62601' 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62601 00:22:47.851 07:20:11 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62601 00:22:51.136 07:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:51.136 07:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:22:51.136 07:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:51.136 07:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.136 07:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:51.136 ************************************ 00:22:51.136 START TEST bdev_hello_world 00:22:51.136 ************************************ 00:22:51.136 07:20:14 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:22:51.136 
[2024-11-20 07:20:14.906852] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:51.136 [2024-11-20 07:20:14.907104] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:22:51.136 [2024-11-20 07:20:15.107437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.136 [2024-11-20 07:20:15.261878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.071 [2024-11-20 07:20:16.005415] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:52.071 [2024-11-20 07:20:16.005486] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:22:52.071 [2024-11-20 07:20:16.005528] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:52.071 [2024-11-20 07:20:16.009436] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:52.071 [2024-11-20 07:20:16.009933] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:52.071 [2024-11-20 07:20:16.009970] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:52.071 [2024-11-20 07:20:16.010284] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:22:52.071 00:22:52.071 [2024-11-20 07:20:16.010358] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:53.446 00:22:53.446 real 0m2.547s 00:22:53.446 user 0m2.118s 00:22:53.446 sys 0m0.314s 00:22:53.446 ************************************ 00:22:53.446 END TEST bdev_hello_world 00:22:53.446 ************************************ 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:53.446 07:20:17 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:53.446 07:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.446 07:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.446 07:20:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:53.446 ************************************ 00:22:53.446 START TEST bdev_bounds 00:22:53.446 ************************************ 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63298 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:53.446 Process bdevio pid: 63298 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63298' 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63298 00:22:53.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
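bdev_bounds runs bdevio as a server: -w makes it start suspended and wait on the RPC socket, and tests.py perform_tests then kicks off the CUnit suites whose output follows. In outline (the wait step stands in for the waitforlisten just traced):

# Outline of the bdev_bounds flow below: bdevio starts suspended (-w),
# tests.py fires the suites over RPC, then the process is reaped.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" '' &
bdevio_pid=$!
# ...wait for /var/tmp/spdk.sock as in waitforlisten, then:
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
wait "$bdevio_pid"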
00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63298 ']' 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.446 07:20:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:53.446 [2024-11-20 07:20:17.469115] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:22:53.446 [2024-11-20 07:20:17.469545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63298 ] 00:22:53.704 [2024-11-20 07:20:17.654147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.704 [2024-11-20 07:20:17.800536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.704 [2024-11-20 07:20:17.800678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.704 [2024-11-20 07:20:17.800681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.641 07:20:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.641 07:20:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:54.641 07:20:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:54.641 I/O targets: 00:22:54.641 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:22:54.641 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:22:54.641 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:22:54.641 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:54.641 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:54.641 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:22:54.641 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:22:54.641 00:22:54.641 00:22:54.641 CUnit - A unit testing framework for C - Version 2.1-3 00:22:54.641 http://cunit.sourceforge.net/ 00:22:54.641 00:22:54.641 00:22:54.641 Suite: bdevio tests on: Nvme3n1 00:22:54.641 Test: blockdev write read block ...passed 00:22:54.641 Test: blockdev write zeroes read block ...passed 00:22:54.641 Test: blockdev write zeroes read no split ...passed 00:22:54.641 Test: blockdev write zeroes read split ...passed 00:22:54.641 Test: blockdev write zeroes read split partial ...passed 00:22:54.641 Test: blockdev reset ...[2024-11-20 07:20:18.810917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:22:54.641 passed 00:22:54.641 Test: blockdev write read 8 blocks ...[2024-11-20 07:20:18.815772] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
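The MiB figures in the I/O targets list above are just block count times block size scaled by 2^20 and rounded: 1548666 blocks of 4096 bytes is about 6049.5 MiB, printed as 6050 MiB. A quick check:

# Reproduce the MiB column of the I/O targets list above.
blocks_to_mib() {
    awk -v b="$1" -v bs="$2" 'BEGIN { printf "%.0f MiB\n", b * bs / 1048576 }'
}
blocks_to_mib 1548666 4096   # 6050 MiB (Nvme0n1)
blocks_to_mib 655104  4096   # 2559 MiB (Nvme1n1p1)
blocks_to_mib 262144  4096   # 1024 MiB (Nvme3n1)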
00:22:54.641 passed 00:22:54.641 Test: blockdev write read size > 128k ...passed 00:22:54.641 Test: blockdev write read invalid size ...passed 00:22:54.641 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:54.641 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:54.641 Test: blockdev write read max offset ...passed 00:22:54.641 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:54.641 Test: blockdev writev readv 8 blocks ...passed 00:22:54.641 Test: blockdev writev readv 30 x 1block ...passed 00:22:54.641 Test: blockdev writev readv block ...passed 00:22:54.641 Test: blockdev writev readv size > 128k ...passed 00:22:54.641 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:54.641 Test: blockdev comparev and writev ...[2024-11-20 07:20:18.826692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7804000 len:0x1000 00:22:54.641 [2024-11-20 07:20:18.826787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:54.641 passed 00:22:54.641 Test: blockdev nvme passthru rw ...passed 00:22:54.641 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:20:18.827727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:22:54.641 [2024-11-20 07:20:18.827790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:22:54.641 passed 00:22:54.641 Test: blockdev nvme admin passthru ...passed 00:22:54.641 Test: blockdev copy ...passed 00:22:54.641 Suite: bdevio tests on: Nvme2n3 00:22:54.641 Test: blockdev write read block ...passed 00:22:54.641 Test: blockdev write zeroes read block ...passed 00:22:54.900 Test: blockdev write zeroes read no split ...passed 00:22:54.900 Test: blockdev write zeroes read split ...passed 00:22:54.900 Test: blockdev write zeroes read split partial ...passed 00:22:54.900 Test: blockdev reset ...[2024-11-20 07:20:18.916936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:22:54.900 passed 00:22:54.900 Test: blockdev write read 8 blocks ...[2024-11-20 07:20:18.922001] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
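The COMPARE FAILURE (02/85) notices in this and the following suites do not indicate a broken run: (02/85) decodes as status code type 0x2 (media and data integrity errors) with status code 0x85, Compare Failure, and since every comparev case still reports passed, the miscompare appears to be the path the test deliberately exercises. To scan a saved copy of this output for the status pairs that occurred (bdevio.log is a hypothetical capture):

# Tally the (SCT/SC) status pairs printed in a saved copy of this log;
# only the anticipated (02/85) and (00/01) pairs should show up.
grep -oE '\([0-9a-f]{2}/[0-9a-f]{2}\)' bdevio.log | sort | uniq -c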
00:22:54.900 passed 00:22:54.900 Test: blockdev write read size > 128k ...passed 00:22:54.900 Test: blockdev write read invalid size ...passed 00:22:54.900 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:54.900 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:54.900 Test: blockdev write read max offset ...passed 00:22:54.900 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:54.900 Test: blockdev writev readv 8 blocks ...passed 00:22:54.900 Test: blockdev writev readv 30 x 1block ...passed 00:22:54.900 Test: blockdev writev readv block ...passed 00:22:54.900 Test: blockdev writev readv size > 128k ...passed 00:22:54.900 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:54.900 Test: blockdev comparev and writev ...[2024-11-20 07:20:18.930984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:22:54.900 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b7802000 len:0x1000 00:22:54.900 [2024-11-20 07:20:18.931239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:54.900 passed 00:22:54.900 Test: blockdev nvme passthru vendor specific ...passed 00:22:54.900 Test: blockdev nvme admin passthru ...[2024-11-20 07:20:18.932921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:22:54.900 [2024-11-20 07:20:18.933002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:22:54.900 passed 00:22:54.900 Test: blockdev copy ...passed 00:22:54.900 Suite: bdevio tests on: Nvme2n2 00:22:54.900 Test: blockdev write read block ...passed 00:22:54.900 Test: blockdev write zeroes read block ...passed 00:22:54.900 Test: blockdev write zeroes read no split ...passed 00:22:54.900 Test: blockdev write zeroes read split ...passed 00:22:54.900 Test: blockdev write zeroes read split partial ...passed 00:22:54.900 Test: blockdev reset ...[2024-11-20 07:20:19.025515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:22:54.900 passed 00:22:54.900 Test: blockdev write read 8 blocks ...[2024-11-20 07:20:19.030729] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:22:54.900 passed 00:22:54.900 Test: blockdev write read size > 128k ...passed 00:22:54.900 Test: blockdev write read invalid size ...passed 00:22:54.900 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:54.900 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:54.900 Test: blockdev write read max offset ...passed 00:22:54.900 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:54.901 Test: blockdev writev readv 8 blocks ...passed 00:22:54.901 Test: blockdev writev readv 30 x 1block ...passed 00:22:54.901 Test: blockdev writev readv block ...passed 00:22:54.901 Test: blockdev writev readv size > 128k ...passed 00:22:54.901 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:54.901 Test: blockdev comparev and writev ...[2024-11-20 07:20:19.040231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca638000 len:0x1000 00:22:54.901 [2024-11-20 07:20:19.040322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:54.901 passed 00:22:54.901 Test: blockdev nvme passthru rw ...passed 00:22:54.901 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:20:19.041102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:22:54.901 [2024-11-20 07:20:19.041160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:22:54.901 passed 00:22:54.901 Test: blockdev nvme admin passthru ...passed 00:22:54.901 Test: blockdev copy ...passed 00:22:54.901 Suite: bdevio tests on: Nvme2n1 00:22:54.901 Test: blockdev write read block ...passed 00:22:54.901 Test: blockdev write zeroes read block ...passed 00:22:54.901 Test: blockdev write zeroes read no split ...passed 00:22:54.901 Test: blockdev write zeroes read split ...passed 00:22:55.174 Test: blockdev write zeroes read split partial ...passed 00:22:55.174 Test: blockdev reset ...[2024-11-20 07:20:19.134609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:22:55.174 passed 00:22:55.174 Test: blockdev write read 8 blocks ...[2024-11-20 07:20:19.141013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:22:55.174 passed 00:22:55.174 Test: blockdev write read size > 128k ...passed 00:22:55.174 Test: blockdev write read invalid size ...passed 00:22:55.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.174 Test: blockdev write read max offset ...passed 00:22:55.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.174 Test: blockdev writev readv 8 blocks ...passed 00:22:55.174 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.174 Test: blockdev writev readv block ...passed 00:22:55.174 Test: blockdev writev readv size > 128k ...passed 00:22:55.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.174 Test: blockdev comparev and writev ...[2024-11-20 07:20:19.150587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:22:55.174 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2ca634000 len:0x1000 00:22:55.174 [2024-11-20 07:20:19.150805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:55.174 passed 00:22:55.174 Test: blockdev nvme passthru vendor specific ...passed 00:22:55.174 Test: blockdev nvme admin passthru ...[2024-11-20 07:20:19.151714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:22:55.174 [2024-11-20 07:20:19.151805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:22:55.174 passed 00:22:55.174 Test: blockdev copy ...passed 00:22:55.174 Suite: bdevio tests on: Nvme1n1p2 00:22:55.174 Test: blockdev write read block ...passed 00:22:55.174 Test: blockdev write zeroes read block ...passed 00:22:55.174 Test: blockdev write zeroes read no split ...passed 00:22:55.174 Test: blockdev write zeroes read split ...passed 00:22:55.174 Test: blockdev write zeroes read split partial ...passed 00:22:55.174 Test: blockdev reset ...[2024-11-20 07:20:19.241076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:22:55.174 [2024-11-20 07:20:19.245806] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:22:55.174 passed 00:22:55.174 Test: blockdev write read 8 blocks ...passed 00:22:55.174 Test: blockdev write read size > 128k ...passed 00:22:55.174 Test: blockdev write read invalid size ...passed 00:22:55.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.174 Test: blockdev write read max offset ...passed 00:22:55.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.174 Test: blockdev writev readv 8 blocks ...passed 00:22:55.174 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.174 Test: blockdev writev readv block ...passed 00:22:55.174 Test: blockdev writev readv size > 128k ...passed 00:22:55.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.174 Test: blockdev comparev and writev ...[2024-11-20 07:20:19.256303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ca630000 len:0x1000 00:22:55.174 [2024-11-20 07:20:19.256594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:55.174 passed 00:22:55.174 Test: blockdev nvme passthru rw ...passed 00:22:55.174 Test: blockdev nvme passthru vendor specific ...passed 00:22:55.174 Test: blockdev nvme admin passthru ...passed 00:22:55.174 Test: blockdev copy ...passed 00:22:55.174 Suite: bdevio tests on: Nvme1n1p1 00:22:55.174 Test: blockdev write read block ...passed 00:22:55.174 Test: blockdev write zeroes read block ...passed 00:22:55.174 Test: blockdev write zeroes read no split ...passed 00:22:55.174 Test: blockdev write zeroes read split ...passed 00:22:55.174 Test: blockdev write zeroes read split partial ...passed 00:22:55.174 Test: blockdev reset ...[2024-11-20 07:20:19.347286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:22:55.174 [2024-11-20 07:20:19.352485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:22:55.174 passed 00:22:55.174 Test: blockdev write read 8 blocks ...passed 00:22:55.174 Test: blockdev write read size > 128k ...passed 00:22:55.174 Test: blockdev write read invalid size ...passed 00:22:55.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.174 Test: blockdev write read max offset ...passed 00:22:55.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.174 Test: blockdev writev readv 8 blocks ...passed 00:22:55.174 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.174 Test: blockdev writev readv block ...passed 00:22:55.174 Test: blockdev writev readv size > 128k ...passed 00:22:55.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.174 Test: blockdev comparev and writev ...[2024-11-20 07:20:19.362423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b820e000 len:0x1000 00:22:55.174 [2024-11-20 07:20:19.362642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:22:55.174 passed 00:22:55.174 Test: blockdev nvme passthru rw ...passed 00:22:55.174 Test: blockdev nvme passthru vendor specific ...passed 00:22:55.174 Test: blockdev nvme admin passthru ...passed 00:22:55.174 Test: blockdev copy ...passed 00:22:55.174 Suite: bdevio tests on: Nvme0n1 00:22:55.174 Test: blockdev write read block ...passed 00:22:55.174 Test: blockdev write zeroes read block ...passed 00:22:55.174 Test: blockdev write zeroes read no split ...passed 00:22:55.434 Test: blockdev write zeroes read split ...passed 00:22:55.434 Test: blockdev write zeroes read split partial ...passed 00:22:55.434 Test: blockdev reset ...[2024-11-20 07:20:19.450791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:22:55.434 [2024-11-20 07:20:19.455605] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:22:55.434 passed 00:22:55.434 Test: blockdev write read 8 blocks ...passed 00:22:55.434 Test: blockdev write read size > 128k ...passed 00:22:55.434 Test: blockdev write read invalid size ...passed 00:22:55.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:55.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:55.434 Test: blockdev write read max offset ...passed 00:22:55.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:55.434 Test: blockdev writev readv 8 blocks ...passed 00:22:55.434 Test: blockdev writev readv 30 x 1block ...passed 00:22:55.434 Test: blockdev writev readv block ...passed 00:22:55.434 Test: blockdev writev readv size > 128k ...passed 00:22:55.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:55.434 Test: blockdev comparev and writev ...[2024-11-20 07:20:19.464062] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:22:55.434 passed 00:22:55.434 Test: blockdev nvme passthru rw ...
00:22:55.434 passed 00:22:55.434 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:20:19.464714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:22:55.434 [2024-11-20 07:20:19.464791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:22:55.434 passed 00:22:55.434 Test: blockdev nvme admin passthru ...passed 00:22:55.434 Test: blockdev copy ...passed 00:22:55.434 00:22:55.434 Run Summary: Type Total Ran Passed Failed Inactive 00:22:55.434 suites 7 7 n/a 0 0 00:22:55.434 tests 161 161 161 0 0 00:22:55.434 asserts 1025 1025 1025 0 n/a 00:22:55.434 00:22:55.434 Elapsed time = 2.075 seconds 00:22:55.434 0 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63298 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63298 ']' 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63298 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63298 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63298' 00:22:55.434 killing process with pid 63298 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63298 00:22:55.434 07:20:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63298 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:56.845 00:22:56.845 real 0m3.533s 00:22:56.845 user 0m9.243s 00:22:56.845 sys 0m0.468s 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 ************************************ 00:22:56.845 END TEST bdev_bounds 00:22:56.845 ************************************ 00:22:56.845 07:20:20 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:22:56.845 07:20:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:56.845 07:20:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:56.845 07:20:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:56.845 ************************************ 00:22:56.845 START TEST bdev_nbd 00:22:56.845 ************************************ 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63364 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63364 /var/tmp/spdk-nbd.sock 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63364 ']' 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.845 07:20:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:57.123 [2024-11-20 07:20:21.123314] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
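The nbd_function_test stage that starts here boils down to two moving parts: a bdev_svc app process serving RPC on /var/tmp/spdk-nbd.sock, and rpc.py calls that export each bdev as a kernel NBD node. A minimal sketch of the same sequence run by hand, using only commands that appear verbatim in this log (the backgrounding and PID handling are assumptions):

  # Start the bdev service on a private RPC socket with the test's bdev config.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  svc_pid=$!
  # Map a bdev to an NBD node; with no device argument the server picks one,
  # with an explicit /dev/nbdX argument (used later in this run) it is pinned.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1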
00:22:57.123 [2024-11-20 07:20:21.123747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.123 [2024-11-20 07:20:21.317380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.382 [2024-11-20 07:20:21.485364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:58.317 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.576 1+0 records in 00:22:58.576 1+0 records out 00:22:58.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583735 s, 7.0 MB/s 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:58.576 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:58.835 1+0 records in 00:22:58.835 1+0 records out 00:22:58.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062259 s, 6.6 MB/s 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:58.835 07:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.094 1+0 records in 00:22:59.094 1+0 records out 00:22:59.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609341 s, 6.7 MB/s 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:59.094 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.663 1+0 records in 00:22:59.663 1+0 records out 00:22:59.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694491 s, 5.9 MB/s 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:59.663 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:59.921 1+0 records in 00:22:59.921 1+0 records out 00:22:59.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075023 s, 5.5 MB/s 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:22:59.921 07:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
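Each nbd_start_disk above is followed by the waitfornbd helper whose xtrace fills this section: it polls /proc/partitions until the node appears, then proves the device answers I/O with one 4 KiB direct read. A condensed reconstruction from the trace (the retry sleep is an assumption; in this run the first grep always succeeds):

  waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
      # Wait for the kernel to register the device in the partition table.
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1    # assumed back-off between probes
    done
    # Single direct-I/O read; a non-empty output file means the device is live.
    dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]
  }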
00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:00.180 1+0 records in 00:23:00.180 1+0 records out 00:23:00.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579954 s, 7.1 MB/s 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:23:00.180 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:00.440 1+0 records in 00:23:00.440 1+0 records out 00:23:00.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702179 s, 5.8 MB/s 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:23:00.440 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:00.699 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd0", 00:23:00.699 "bdev_name": "Nvme0n1" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd1", 00:23:00.699 "bdev_name": "Nvme1n1p1" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd2", 00:23:00.699 "bdev_name": "Nvme1n1p2" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd3", 00:23:00.699 "bdev_name": "Nvme2n1" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd4", 00:23:00.699 "bdev_name": "Nvme2n2" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd5", 00:23:00.699 "bdev_name": "Nvme2n3" 00:23:00.699 }, 00:23:00.699 { 00:23:00.699 "nbd_device": "/dev/nbd6", 00:23:00.699 "bdev_name": "Nvme3n1" 00:23:00.699 } 00:23:00.699 ]' 00:23:00.699 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:00.957 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:00.957 { 00:23:00.957 "nbd_device": "/dev/nbd0", 00:23:00.957 "bdev_name": "Nvme0n1" 00:23:00.957 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd1", 00:23:00.958 "bdev_name": "Nvme1n1p1" 00:23:00.958 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd2", 00:23:00.958 "bdev_name": "Nvme1n1p2" 00:23:00.958 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd3", 00:23:00.958 "bdev_name": "Nvme2n1" 00:23:00.958 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd4", 00:23:00.958 "bdev_name": "Nvme2n2" 00:23:00.958 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd5", 00:23:00.958 "bdev_name": "Nvme2n3" 00:23:00.958 }, 00:23:00.958 { 00:23:00.958 "nbd_device": "/dev/nbd6", 00:23:00.958 "bdev_name": "Nvme3n1" 00:23:00.958 } 00:23:00.958 ]' 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:00.958 07:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.217 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.475 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.733 07:20:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:02.300 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.301 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.559 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.818 07:20:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
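Teardown is the mirror image: nbd_stop_disk is issued per device over the same socket, and the waitfornbd_exit helper traced here polls /proc/partitions until the name disappears. A sketch under the same assumptions as above:

  waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      # Done as soon as the device drops out of the partition table.
      grep -q -w "$nbd_name" /proc/partitions || break
      sleep 0.1    # assumed back-off between probes
    done
    return 0
  }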
00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.078 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.337 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:03.338 07:20:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:03.338 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:23:03.603 /dev/nbd0 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.603 1+0 records in 00:23:03.603 1+0 records out 00:23:03.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438327 s, 9.3 MB/s 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:03.603 07:20:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:23:04.170 /dev/nbd1 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.170 07:20:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.170 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.171 1+0 records in 00:23:04.171 1+0 records out 00:23:04.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570972 s, 7.2 MB/s 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:04.171 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:23:04.430 /dev/nbd10 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.430 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.430 1+0 records in 00:23:04.430 1+0 records out 00:23:04.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662229 s, 6.2 MB/s 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:04.431 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:23:04.692 /dev/nbd11 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.692 1+0 records in 00:23:04.692 1+0 records out 00:23:04.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783878 s, 5.2 MB/s 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:04.692 07:20:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:23:04.950 /dev/nbd12 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
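The second start pass visible here walks two parallel arrays, which is why the GPT partitions land on /dev/nbd10 and up rather than /dev/nbd2. Condensed from the trace, with the bdev and device names copied from this run:

  bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  for i in "${!bdev_list[@]}"; do
    # Pin each bdev to its paired NBD node, then wait until the kernel sees it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
  done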
00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.950 1+0 records in 00:23:04.950 1+0 records out 00:23:04.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100199 s, 4.1 MB/s 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:04.950 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:23:05.209 /dev/nbd13 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.209 1+0 records in 00:23:05.209 1+0 records out 00:23:05.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831363 s, 4.9 MB/s 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:05.209 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:23:05.467 /dev/nbd14 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.467 1+0 records in 00:23:05.467 1+0 records out 00:23:05.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758397 s, 5.4 MB/s 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:05.467 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:06.033 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:06.033 { 00:23:06.033 "nbd_device": "/dev/nbd0", 00:23:06.033 "bdev_name": "Nvme0n1" 00:23:06.033 }, 00:23:06.033 { 00:23:06.033 "nbd_device": "/dev/nbd1", 00:23:06.033 "bdev_name": "Nvme1n1p1" 00:23:06.033 }, 00:23:06.033 { 00:23:06.033 "nbd_device": "/dev/nbd10", 00:23:06.033 "bdev_name": "Nvme1n1p2" 00:23:06.033 }, 00:23:06.033 { 00:23:06.033 "nbd_device": "/dev/nbd11", 00:23:06.033 "bdev_name": "Nvme2n1" 00:23:06.033 }, 00:23:06.033 { 00:23:06.033 "nbd_device": "/dev/nbd12", 00:23:06.033 "bdev_name": "Nvme2n2" 00:23:06.033 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd13", 00:23:06.034 "bdev_name": "Nvme2n3" 
00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd14", 00:23:06.034 "bdev_name": "Nvme3n1" 00:23:06.034 } 00:23:06.034 ]' 00:23:06.034 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:06.034 07:20:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd0", 00:23:06.034 "bdev_name": "Nvme0n1" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd1", 00:23:06.034 "bdev_name": "Nvme1n1p1" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd10", 00:23:06.034 "bdev_name": "Nvme1n1p2" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd11", 00:23:06.034 "bdev_name": "Nvme2n1" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd12", 00:23:06.034 "bdev_name": "Nvme2n2" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd13", 00:23:06.034 "bdev_name": "Nvme2n3" 00:23:06.034 }, 00:23:06.034 { 00:23:06.034 "nbd_device": "/dev/nbd14", 00:23:06.034 "bdev_name": "Nvme3n1" 00:23:06.034 } 00:23:06.034 ]' 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:06.034 /dev/nbd1 00:23:06.034 /dev/nbd10 00:23:06.034 /dev/nbd11 00:23:06.034 /dev/nbd12 00:23:06.034 /dev/nbd13 00:23:06.034 /dev/nbd14' 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:06.034 /dev/nbd1 00:23:06.034 /dev/nbd10 00:23:06.034 /dev/nbd11 00:23:06.034 /dev/nbd12 00:23:06.034 /dev/nbd13 00:23:06.034 /dev/nbd14' 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:06.034 256+0 records in 00:23:06.034 256+0 records out 00:23:06.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710317 s, 148 MB/s 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:06.034 256+0 records in 00:23:06.034 256+0 records out 00:23:06.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.142196 s, 7.4 MB/s 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.034 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:06.293 256+0 records in 00:23:06.293 256+0 records out 00:23:06.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152024 s, 6.9 MB/s 00:23:06.293 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.293 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:06.293 256+0 records in 00:23:06.293 256+0 records out 00:23:06.293 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147933 s, 7.1 MB/s 00:23:06.293 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.293 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:06.551 256+0 records in 00:23:06.551 256+0 records out 00:23:06.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151206 s, 6.9 MB/s 00:23:06.551 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.551 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:06.809 256+0 records in 00:23:06.809 256+0 records out 00:23:06.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149199 s, 7.0 MB/s 00:23:06.809 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.809 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:06.809 256+0 records in 00:23:06.809 256+0 records out 00:23:06.809 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150433 s, 7.0 MB/s 00:23:06.809 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:06.809 07:20:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:23:07.068 256+0 records in 00:23:07.068 256+0 records out 00:23:07.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158766 s, 6.6 MB/s 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.068 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:07.327 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:07.585 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:07.585 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:07.585 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.585 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.585 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:07.586 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.586 07:20:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:23:07.586 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.586 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:07.845 07:20:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.105 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.364 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.622 07:20:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.881 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:09.139 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:09.460 07:20:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:10.027 malloc_lvol_verify 00:23:10.027 07:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:10.284 e061e5dc-4006-420c-ac2b-aa84e5be17d8 00:23:10.543 07:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:10.799 50c914a5-d591-46c1-a4ad-bd81a40b2c4d 00:23:10.799 07:20:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:11.057 /dev/nbd0 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:11.057 mke2fs 1.47.0 (5-Feb-2023) 00:23:11.057 Discarding device blocks: 0/4096 done 00:23:11.057 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:11.057 00:23:11.057 Allocating group tables: 0/1 done 00:23:11.057 Writing inode tables: 0/1 done 00:23:11.057 Creating journal (1024 blocks): done 00:23:11.057 Writing superblocks and filesystem accounting information: 0/1 done 00:23:11.057 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:11.057 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63364 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63364 ']' 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63364 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63364 00:23:11.315 killing process with pid 63364 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63364' 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63364 00:23:11.315 07:20:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63364 00:23:12.686 07:20:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:12.686 00:23:12.686 real 0m15.897s 00:23:12.686 user 0m21.465s 00:23:12.686 sys 0m6.272s 00:23:12.686 07:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.686 07:20:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:12.686 ************************************ 00:23:12.686 END TEST bdev_nbd 00:23:12.686 ************************************ 00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:23:12.944 skipping fio tests on NVMe due to multi-ns failures. 00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:12.944 07:20:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:12.944 07:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:12.944 07:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.944 07:20:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:23:12.944 ************************************ 00:23:12.944 START TEST bdev_verify 00:23:12.944 ************************************ 00:23:12.944 07:20:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:12.944 [2024-11-20 07:20:37.087301] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:12.944 [2024-11-20 07:20:37.087532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63827 ] 00:23:13.202 [2024-11-20 07:20:37.291841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:13.460 [2024-11-20 07:20:37.450634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.460 [2024-11-20 07:20:37.450637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.396 Running I/O for 5 seconds... 
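Note: the whole bdev_verify stage is a single bdevperf invocation, traced at blockdev.sh@776 above. Restated with the flags spelled out — the flag glosses come from bdevperf's usage text and are worth re-checking against this SPDK tree:

  # 128 outstanding I/Os per job, 4096-byte I/Os, 'verify' workload (write,
  # read back, compare), 5-second run, core mask 0x3 (the two reactors that
  # just started); -C lets every core submit to every bdev, which is why the
  # result table below reports one job per core for each device.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3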
00:23:16.795 17984.00 IOPS, 70.25 MiB/s
[2024-11-20T07:20:41.935Z] 18176.00 IOPS, 71.00 MiB/s
[2024-11-20T07:20:42.869Z] 18432.00 IOPS, 72.00 MiB/s
[2024-11-20T07:20:43.803Z] 18128.00 IOPS, 70.81 MiB/s
[2024-11-20T07:20:43.803Z] 18368.00 IOPS, 71.75 MiB/s
00:23:19.600 Latency(us)
[2024-11-20T07:20:43.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:19.600 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0xbd0bd
00:23:19.600 Nvme0n1 : 5.08 1346.75 5.26 0.00 0.00 94517.28 15166.90 91375.91
00:23:19.600 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:23:19.600 Nvme0n1 : 5.10 1228.71 4.80 0.00 0.00 103438.18 16352.79 141807.42
00:23:19.600 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x4ff80
00:23:19.600 Nvme1n1p1 : 5.09 1346.31 5.26 0.00 0.00 94381.14 14293.09 93373.20
00:23:19.600 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x4ff80 length 0x4ff80
00:23:19.600 Nvme1n1p1 : 5.11 1227.82 4.80 0.00 0.00 103283.99 18350.08 139810.13
00:23:19.600 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x4ff7f
00:23:19.600 Nvme1n1p2 : 5.09 1345.81 5.26 0.00 0.00 94236.81 13419.28 90377.26
00:23:19.600 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:23:19.600 Nvme1n1p2 : 5.11 1227.38 4.79 0.00 0.00 103111.87 18350.08 143804.71
00:23:19.600 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x80000
00:23:19.600 Nvme2n1 : 5.10 1354.20 5.29 0.00 0.00 93825.49 11734.06 86882.01
00:23:19.600 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x80000 length 0x80000
00:23:19.600 Nvme2n1 : 5.11 1227.05 4.79 0.00 0.00 102914.83 18225.25 146800.64
00:23:19.600 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x80000
00:23:19.600 Nvme2n2 : 5.11 1353.66 5.29 0.00 0.00 93649.94 11609.23 86882.01
00:23:19.600 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x80000 length 0x80000
00:23:19.600 Nvme2n2 : 5.11 1226.71 4.79 0.00 0.00 102721.54 17476.27 152792.50
00:23:19.600 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x80000
00:23:19.600 Nvme2n3 : 5.11 1353.30 5.29 0.00 0.00 93496.93 10797.84 88379.98
00:23:19.600 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x80000 length 0x80000
00:23:19.600 Nvme2n3 : 5.10 1229.45 4.80 0.00 0.00 103867.10 17601.10 148797.93
00:23:19.600 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x0 length 0x20000
00:23:19.600 Nvme3n1 : 5.11 1352.91 5.28 0.00 0.00 93311.95 10735.42 88379.98
00:23:19.600 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:19.600 Verification LBA range: start 0x20000 length 0x20000
00:23:19.600 Nvme3n1 : 5.10 1229.07 4.80 0.00 0.00 103638.91 17725.93 145802.00
[2024-11-20T07:20:43.804Z] ===================================================================================================================
[2024-11-20T07:20:43.804Z] Total : 18049.14 70.50 0.00 0.00 98380.73 10735.42 152792.50
00:23:20.974
00:23:20.974 real 0m8.164s user 0m14.871s sys 0m0.385s
07:20:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
07:20:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:23:20.974 ************************************
00:23:20.974 END TEST bdev_verify
00:23:20.974 ************************************
00:23:20.974 07:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
07:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
07:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
07:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:23:21.234 ************************************
00:23:21.234 START TEST bdev_verify_big_io
00:23:21.234 ************************************
00:23:21.234 07:20:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:23:21.234 [2024-11-20 07:20:45.252098] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:23:21.234 [2024-11-20 07:20:45.252278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63937 ]
00:23:21.492 [2024-11-20 07:20:45.450797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:21.492 [2024-11-20 07:20:45.647988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:21.492 [2024-11-20 07:20:45.648000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:22.441 Running I/O for 5 seconds...
00:23:27.102 897.00 IOPS, 56.06 MiB/s
[2024-11-20T07:20:52.681Z] 2327.50 IOPS, 145.47 MiB/s
[2024-11-20T07:20:52.940Z] 2801.00 IOPS, 175.06 MiB/s
00:23:28.737 Latency(us)
[2024-11-20T07:20:52.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:28.737 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0xbd0b
00:23:28.737 Nvme0n1 : 5.78 112.08 7.01 0.00 0.00 1089141.09 35451.86 1102502.77
00:23:28.737 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0xbd0b length 0xbd0b
00:23:28.737 Nvme0n1 : 5.80 114.21 7.14 0.00 0.00 1068216.09 19473.55 1094513.62
00:23:28.737 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x4ff8
00:23:28.737 Nvme1n1p1 : 5.78 113.07 7.07 0.00 0.00 1067508.24 100363.70 1318209.83
00:23:28.737 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x4ff8 length 0x4ff8
00:23:28.737 Nvme1n1p1 : 5.81 118.44 7.40 0.00 0.00 1017481.78 51430.16 1038589.56
00:23:28.737 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x4ff7
00:23:28.737 Nvme1n1p2 : 5.93 75.61 4.73 0.00 0.00 1552330.50 157785.72 2157070.63
00:23:28.737 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x4ff7 length 0x4ff7
00:23:28.737 Nvme1n1p2 : 5.81 112.68 7.04 0.00 0.00 1034719.63 86382.69 1653754.15
00:23:28.737 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x8000
00:23:28.737 Nvme2n1 : 5.82 120.17 7.51 0.00 0.00 964974.23 25340.59 1126470.22
00:23:28.737 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x8000 length 0x8000
00:23:28.737 Nvme2n1 : 5.90 119.75 7.48 0.00 0.00 949745.93 83886.08 1382123.03
00:23:28.737 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x8000
00:23:28.737 Nvme2n2 : 5.91 125.71 7.86 0.00 0.00 896932.84 44439.65 1150437.67
00:23:28.737 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x8000 length 0x8000
00:23:28.737 Nvme2n2 : 5.95 120.26 7.52 0.00 0.00 920365.67 34952.53 1709678.20
00:23:28.737 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x8000
00:23:28.737 Nvme2n3 : 5.91 125.47 7.84 0.00 0.00 873882.31 45188.63 1174405.12
00:23:28.737 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x8000 length 0x8000
00:23:28.737 Nvme2n3 : 6.01 127.92 8.00 0.00 0.00 840975.78 19723.22 1725656.50
00:23:28.737 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x0 length 0x2000
00:23:28.737 Nvme3n1 : 5.93 140.89 8.81 0.00 0.00 765584.56 7021.71 1190383.42
00:23:28.737 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:23:28.737 Verification LBA range: start 0x2000 length 0x2000
00:23:28.737 Nvme3n1 : 6.04 148.88 9.30 0.00 0.00 711755.08 1513.57 1757613.10
00:23:28.737 [2024-11-20T07:20:52.940Z] ===================================================================================================================
[2024-11-20T07:20:52.940Z] Total : 1675.14 104.70 0.00 0.00 955894.13 1513.57 2157070.63
00:23:31.340
00:23:31.340 real 0m9.785s user 0m18.066s sys 0m0.386s
07:20:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
07:20:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:23:31.340 ************************************
00:23:31.340 END TEST bdev_verify_big_io
00:23:31.340 ************************************
00:23:31.340 07:20:54 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
07:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
07:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
07:20:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:23:31.340 ************************************
00:23:31.340 START TEST bdev_write_zeroes
00:23:31.340 ************************************
00:23:31.340 07:20:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:31.340 [2024-11-20 07:20:55.086197] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:23:31.340 [2024-11-20 07:20:55.086351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64063 ]
00:23:31.340 [2024-11-20 07:20:55.272907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:31.340 [2024-11-20 07:20:55.416527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:32.272 Running I/O for 1 seconds...
00:23:33.253 49728.00 IOPS, 194.25 MiB/s
00:23:33.253
00:23:33.253 Latency(us)
[2024-11-20T07:20:57.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:33.253 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme0n1 : 1.03 7067.14 27.61 0.00 0.00 18056.54 15042.07 32955.25
00:23:33.253 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme1n1p1 : 1.03 7056.60 27.56 0.00 0.00 18057.87 14917.24 33953.89
00:23:33.253 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme1n1p2 : 1.04 7046.33 27.52 0.00 0.00 18011.15 15042.07 31706.94
00:23:33.253 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme2n1 : 1.04 7036.92 27.49 0.00 0.00 17939.30 15104.49 28711.01
00:23:33.253 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme2n2 : 1.04 7027.46 27.45 0.00 0.00 17900.29 14667.58 28086.86
00:23:33.253 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme2n3 : 1.04 7017.75 27.41 0.00 0.00 17865.39 12607.88 29085.50
00:23:33.253 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:23:33.253 Nvme3n1 : 1.04 7008.21 27.38 0.00 0.00 17834.04 11109.91 31082.79
00:23:33.253 [2024-11-20T07:20:57.456Z] ===================================================================================================================
[2024-11-20T07:20:57.456Z] Total : 49260.43 192.42 0.00 0.00 17952.08 11109.91 33953.89
00:23:34.629
00:23:34.629 real 0m3.703s user 0m3.284s sys 0m0.286s
07:20:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
07:20:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:23:34.629 ************************************
00:23:34.629 END TEST bdev_write_zeroes
00:23:34.629 ************************************
00:23:34.629 07:20:58 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
07:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
07:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
07:20:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:23:34.629 ************************************
00:23:34.629 START TEST bdev_json_nonenclosed
00:23:34.629 ************************************
00:23:34.629 07:20:58 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:23:34.889 [2024-11-20 07:20:58.832520] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:23:34.889 [2024-11-20 07:20:58.832766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64118 ] 00:23:34.889 [2024-11-20 07:20:59.026447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.177 [2024-11-20 07:20:59.215503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.177 [2024-11-20 07:20:59.215616] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:35.177 [2024-11-20 07:20:59.215643] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:35.177 [2024-11-20 07:20:59.215657] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:35.448 00:23:35.448 real 0m0.787s 00:23:35.448 user 0m0.513s 00:23:35.448 sys 0m0.166s 00:23:35.448 07:20:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.448 07:20:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:35.448 ************************************ 00:23:35.448 END TEST bdev_json_nonenclosed 00:23:35.448 ************************************ 00:23:35.448 07:20:59 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:35.448 07:20:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:35.448 07:20:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.448 07:20:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:23:35.448 ************************************ 00:23:35.448 START TEST bdev_json_nonarray 00:23:35.448 ************************************ 00:23:35.448 07:20:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:35.707 [2024-11-20 07:20:59.678157] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:35.707 [2024-11-20 07:20:59.678376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64149 ] 00:23:35.707 [2024-11-20 07:20:59.868170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.974 [2024-11-20 07:21:00.021584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.974 [2024-11-20 07:21:00.021721] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
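Note: bdev_json_nonenclosed and the bdev_json_nonarray run just traced both feed bdevperf a deliberately malformed --json config and expect startup to fail. The fixture files themselves are not dumped in this log, so the shapes below are inferred from the two error messages alone:

  { "subsystems": [] }        <- accepted: a top-level JSON object whose "subsystems" member is an array
  [ { "subsystems": [] } ]    <- rejected: "Invalid JSON configuration: not enclosed in {}."
  { "subsystems": {} }        <- rejected: "Invalid JSON configuration: 'subsystems' should be an array."

In both negative runs the app aborts before any I/O is issued, hence the spdk_app_stop'd-on-non-zero warning in place of a latency table.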
00:23:35.974 [2024-11-20 07:21:00.021749] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:35.974 [2024-11-20 07:21:00.021765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:36.236 00:23:36.236 real 0m0.758s 00:23:36.236 user 0m0.477s 00:23:36.236 sys 0m0.174s 00:23:36.236 07:21:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.236 ************************************ 00:23:36.236 END TEST bdev_json_nonarray 00:23:36.236 ************************************ 00:23:36.236 07:21:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:36.236 07:21:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:23:36.236 07:21:00 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:23:36.236 07:21:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:23:36.236 07:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:36.236 07:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.236 07:21:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:23:36.236 ************************************ 00:23:36.236 START TEST bdev_gpt_uuid 00:23:36.236 ************************************ 00:23:36.236 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:23:36.236 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64180 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64180 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64180 ']' 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.237 07:21:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:36.495 [2024-11-20 07:21:00.552837] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:23:36.495 [2024-11-20 07:21:00.552994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64180 ] 00:23:36.753 [2024-11-20 07:21:00.731904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.753 [2024-11-20 07:21:00.878035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.130 07:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.131 07:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:23:38.131 07:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:38.131 07:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.131 07:21:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:38.131 Some configs were skipped because the RPC state that can call them passed over. 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:23:38.131 { 00:23:38.131 "name": "Nvme1n1p1", 00:23:38.131 "aliases": [ 00:23:38.131 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:23:38.131 ], 00:23:38.131 "product_name": "GPT Disk", 00:23:38.131 "block_size": 4096, 00:23:38.131 "num_blocks": 655104, 00:23:38.131 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:23:38.131 "assigned_rate_limits": { 00:23:38.131 "rw_ios_per_sec": 0, 00:23:38.131 "rw_mbytes_per_sec": 0, 00:23:38.131 "r_mbytes_per_sec": 0, 00:23:38.131 "w_mbytes_per_sec": 0 00:23:38.131 }, 00:23:38.131 "claimed": false, 00:23:38.131 "zoned": false, 00:23:38.131 "supported_io_types": { 00:23:38.131 "read": true, 00:23:38.131 "write": true, 00:23:38.131 "unmap": true, 00:23:38.131 "flush": true, 00:23:38.131 "reset": true, 00:23:38.131 "nvme_admin": false, 00:23:38.131 "nvme_io": false, 00:23:38.131 "nvme_io_md": false, 00:23:38.131 "write_zeroes": true, 00:23:38.131 "zcopy": false, 00:23:38.131 "get_zone_info": false, 00:23:38.131 "zone_management": false, 00:23:38.131 "zone_append": false, 00:23:38.131 "compare": true, 00:23:38.131 "compare_and_write": false, 00:23:38.131 "abort": true, 00:23:38.131 "seek_hole": false, 00:23:38.131 "seek_data": false, 00:23:38.131 "copy": true, 00:23:38.131 "nvme_iov_md": false 00:23:38.131 }, 00:23:38.131 "driver_specific": { 
00:23:38.131 "gpt": { 00:23:38.131 "base_bdev": "Nvme1n1", 00:23:38.131 "offset_blocks": 256, 00:23:38.131 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:23:38.131 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:23:38.131 "partition_name": "SPDK_TEST_first" 00:23:38.131 } 00:23:38.131 } 00:23:38.131 } 00:23:38.131 ]' 00:23:38.131 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:23:38.389 { 00:23:38.389 "name": "Nvme1n1p2", 00:23:38.389 "aliases": [ 00:23:38.389 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:23:38.389 ], 00:23:38.389 "product_name": "GPT Disk", 00:23:38.389 "block_size": 4096, 00:23:38.389 "num_blocks": 655103, 00:23:38.389 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:23:38.389 "assigned_rate_limits": { 00:23:38.389 "rw_ios_per_sec": 0, 00:23:38.389 "rw_mbytes_per_sec": 0, 00:23:38.389 "r_mbytes_per_sec": 0, 00:23:38.389 "w_mbytes_per_sec": 0 00:23:38.389 }, 00:23:38.389 "claimed": false, 00:23:38.389 "zoned": false, 00:23:38.389 "supported_io_types": { 00:23:38.389 "read": true, 00:23:38.389 "write": true, 00:23:38.389 "unmap": true, 00:23:38.389 "flush": true, 00:23:38.389 "reset": true, 00:23:38.389 "nvme_admin": false, 00:23:38.389 "nvme_io": false, 00:23:38.389 "nvme_io_md": false, 00:23:38.389 "write_zeroes": true, 00:23:38.389 "zcopy": false, 00:23:38.389 "get_zone_info": false, 00:23:38.389 "zone_management": false, 00:23:38.389 "zone_append": false, 00:23:38.389 "compare": true, 00:23:38.389 "compare_and_write": false, 00:23:38.389 "abort": true, 00:23:38.389 "seek_hole": false, 00:23:38.389 "seek_data": false, 00:23:38.389 "copy": true, 00:23:38.389 "nvme_iov_md": false 00:23:38.389 }, 00:23:38.389 "driver_specific": { 00:23:38.389 "gpt": { 00:23:38.389 "base_bdev": "Nvme1n1", 00:23:38.389 "offset_blocks": 655360, 00:23:38.389 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:23:38.389 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:23:38.389 "partition_name": "SPDK_TEST_second" 00:23:38.389 } 00:23:38.389 } 00:23:38.389 } 00:23:38.389 ]' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64180 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64180 ']' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64180 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.389 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64180 00:23:38.648 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.648 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.648 killing process with pid 64180 00:23:38.648 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64180' 00:23:38.648 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64180 00:23:38.648 07:21:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64180 00:23:41.937 00:23:41.938 real 0m5.084s 00:23:41.938 user 0m5.285s 00:23:41.938 sys 0m0.609s 00:23:41.938 07:21:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.938 07:21:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 ************************************ 00:23:41.938 END TEST bdev_gpt_uuid 00:23:41.938 ************************************ 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:23:41.938 07:21:05 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:41.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:41.938 Waiting for block devices as requested 00:23:41.938 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.196 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:23:42.196 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:42.454 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:47.795 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:47.795 07:21:11 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:23:47.795 07:21:11 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:23:47.795 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:23:47.795 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:23:47.795 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:23:47.795 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:23:47.795 07:21:11 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:23:47.795 00:23:47.795 real 1m12.351s 00:23:47.795 user 1m31.734s 00:23:47.795 sys 0m13.175s 00:23:47.795 07:21:11 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.795 07:21:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:23:47.795 ************************************ 00:23:47.795 END TEST blockdev_nvme_gpt 00:23:47.795 ************************************ 00:23:47.795 07:21:11 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:23:47.795 07:21:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:47.795 07:21:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.795 07:21:11 -- common/autotest_common.sh@10 -- # set +x 00:23:47.795 ************************************ 00:23:47.795 START TEST nvme 00:23:47.795 ************************************ 00:23:47.795 07:21:11 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:23:47.795 * Looking for test storage... 00:23:47.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:47.795 07:21:11 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.796 07:21:11 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.796 07:21:11 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:48.055 07:21:12 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.055 07:21:12 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.055 07:21:12 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.055 07:21:12 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.055 07:21:12 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.055 07:21:12 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.055 07:21:12 nvme -- scripts/common.sh@344 -- # case "$op" in 00:23:48.055 07:21:12 nvme -- scripts/common.sh@345 -- # : 1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.055 07:21:12 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:48.055 07:21:12 nvme -- scripts/common.sh@365 -- # decimal 1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@353 -- # local d=1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.055 07:21:12 nvme -- scripts/common.sh@355 -- # echo 1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.055 07:21:12 nvme -- scripts/common.sh@366 -- # decimal 2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@353 -- # local d=2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.055 07:21:12 nvme -- scripts/common.sh@355 -- # echo 2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.055 07:21:12 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.055 07:21:12 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.055 07:21:12 nvme -- scripts/common.sh@368 -- # return 0 00:23:48.055 07:21:12 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.055 07:21:12 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:48.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.055 --rc genhtml_branch_coverage=1 00:23:48.055 --rc genhtml_function_coverage=1 00:23:48.055 --rc genhtml_legend=1 00:23:48.055 --rc geninfo_all_blocks=1 00:23:48.055 --rc geninfo_unexecuted_blocks=1 00:23:48.055 00:23:48.055 ' 00:23:48.055 07:21:12 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:48.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.055 --rc genhtml_branch_coverage=1 00:23:48.055 --rc genhtml_function_coverage=1 00:23:48.055 --rc genhtml_legend=1 00:23:48.055 --rc geninfo_all_blocks=1 00:23:48.055 --rc geninfo_unexecuted_blocks=1 00:23:48.055 00:23:48.055 ' 00:23:48.056 07:21:12 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:48.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.056 --rc genhtml_branch_coverage=1 00:23:48.056 --rc genhtml_function_coverage=1 00:23:48.056 --rc genhtml_legend=1 00:23:48.056 --rc geninfo_all_blocks=1 00:23:48.056 --rc geninfo_unexecuted_blocks=1 00:23:48.056 00:23:48.056 ' 00:23:48.056 07:21:12 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:48.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.056 --rc genhtml_branch_coverage=1 00:23:48.056 --rc genhtml_function_coverage=1 00:23:48.056 --rc genhtml_legend=1 00:23:48.056 --rc geninfo_all_blocks=1 00:23:48.056 --rc geninfo_unexecuted_blocks=1 00:23:48.056 00:23:48.056 ' 00:23:48.056 07:21:12 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:48.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:49.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.188 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.188 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.447 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:23:49.447 07:21:13 nvme -- nvme/nvme.sh@79 -- # uname 00:23:49.447 07:21:13 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:23:49.447 07:21:13 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:23:49.447 07:21:13 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:23:49.447 07:21:13 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1075 -- # stubpid=64841 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:23:49.447 Waiting for stub to ready for secondary processes... 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64841 ]] 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:23:49.447 07:21:13 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:23:49.447 [2024-11-20 07:21:13.621109] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:23:49.447 [2024-11-20 07:21:13.621304] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:23:50.412 07:21:14 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:23:50.412 07:21:14 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64841 ]] 00:23:50.412 07:21:14 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:23:50.978 [2024-11-20 07:21:15.107139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:51.236 [2024-11-20 07:21:15.278080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.236 [2024-11-20 07:21:15.278155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.236 [2024-11-20 07:21:15.278167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.236 [2024-11-20 07:21:15.301741] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:23:51.236 [2024-11-20 07:21:15.301860] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:23:51.236 [2024-11-20 07:21:15.314136] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:23:51.236 [2024-11-20 07:21:15.314327] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:23:51.236 [2024-11-20 07:21:15.317878] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:23:51.236 [2024-11-20 07:21:15.318197] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:23:51.236 [2024-11-20 07:21:15.318327] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:23:51.236 [2024-11-20 07:21:15.321702] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:23:51.237 [2024-11-20 07:21:15.322035] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:23:51.237 [2024-11-20 07:21:15.322529] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:23:51.237 [2024-11-20 07:21:15.325975] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:23:51.237 [2024-11-20 07:21:15.326374] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:23:51.237 [2024-11-20 07:21:15.326476] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:23:51.237 [2024-11-20 07:21:15.326546] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:23:51.237 [2024-11-20 07:21:15.326615] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:23:51.495 07:21:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:23:51.495 done. 00:23:51.495 07:21:15 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:23:51.495 07:21:15 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:23:51.495 07:21:15 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:23:51.495 07:21:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.495 07:21:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:51.495 ************************************ 00:23:51.495 START TEST nvme_reset 00:23:51.495 ************************************ 00:23:51.495 07:21:15 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:23:52.061 Initializing NVMe Controllers 00:23:52.061 Skipping QEMU NVMe SSD at 0000:00:10.0 00:23:52.061 Skipping QEMU NVMe SSD at 0000:00:11.0 00:23:52.061 Skipping QEMU NVMe SSD at 0000:00:13.0 00:23:52.061 Skipping QEMU NVMe SSD at 0000:00:12.0 00:23:52.061 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:23:52.061 00:23:52.061 real 0m0.408s 00:23:52.061 user 0m0.151s 00:23:52.061 sys 0m0.203s 00:23:52.061 07:21:15 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.061 ************************************ 00:23:52.061 END TEST nvme_reset 00:23:52.061 ************************************ 00:23:52.061 07:21:15 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:23:52.061 07:21:16 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:23:52.061 07:21:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.061 07:21:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.061 07:21:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:52.061 ************************************ 00:23:52.061 START TEST nvme_identify 00:23:52.061 ************************************ 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:23:52.061 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:23:52.061 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:23:52.061 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:23:52.061 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:52.061 07:21:16 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:23:52.061 07:21:16 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:52.061 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:23:52.323 [2024-11-20 07:21:16.477547] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64876 terminated unexpected 00:23:52.323 ===================================================== 00:23:52.323 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:52.323 ===================================================== 00:23:52.323 Controller Capabilities/Features 00:23:52.323 ================================ 00:23:52.323 Vendor ID: 1b36 00:23:52.323 Subsystem Vendor ID: 1af4 00:23:52.323 Serial Number: 12340 00:23:52.323 Model Number: QEMU NVMe Ctrl 00:23:52.323 Firmware Version: 8.0.0 00:23:52.323 Recommended Arb Burst: 6 00:23:52.323 IEEE OUI Identifier: 00 54 52 00:23:52.323 Multi-path I/O 00:23:52.323 May have multiple subsystem ports: No 00:23:52.323 May have multiple controllers: No 00:23:52.323 Associated with SR-IOV VF: No 00:23:52.323 Max Data Transfer Size: 524288 00:23:52.323 Max Number of Namespaces: 256 00:23:52.323 Max Number of I/O Queues: 64 00:23:52.323 NVMe Specification Version (VS): 1.4 00:23:52.323 NVMe Specification Version (Identify): 1.4 00:23:52.323 Maximum Queue Entries: 2048 00:23:52.323 Contiguous Queues Required: Yes 00:23:52.323 Arbitration Mechanisms Supported 00:23:52.323 Weighted Round Robin: Not Supported 00:23:52.323 Vendor Specific: Not Supported 00:23:52.323 Reset Timeout: 7500 ms 00:23:52.323 Doorbell Stride: 4 bytes 00:23:52.323 NVM Subsystem Reset: Not Supported 00:23:52.323 Command Sets Supported 00:23:52.323 NVM Command Set: Supported 00:23:52.323 Boot Partition: Not Supported 00:23:52.323 Memory Page Size Minimum: 4096 bytes 00:23:52.323 Memory Page Size Maximum: 65536 bytes 00:23:52.323 Persistent Memory Region: Not Supported 00:23:52.323 Optional Asynchronous Events Supported 00:23:52.323 Namespace Attribute Notices: Supported 00:23:52.323 Firmware Activation Notices: Not Supported 00:23:52.323 ANA Change Notices: Not Supported 00:23:52.323 PLE Aggregate Log Change Notices: Not Supported 00:23:52.323 LBA Status Info Alert Notices: Not Supported 00:23:52.323 EGE Aggregate Log Change Notices: Not Supported 00:23:52.323 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.323 Zone Descriptor Change Notices: Not Supported 00:23:52.323 Discovery Log Change Notices: Not Supported 00:23:52.323 Controller Attributes 00:23:52.323 128-bit Host Identifier: Not Supported 00:23:52.323 Non-Operational Permissive Mode: Not Supported 00:23:52.323 NVM Sets: Not Supported 00:23:52.323 Read Recovery Levels: Not Supported 00:23:52.323 Endurance Groups: Not Supported 00:23:52.323 Predictable Latency Mode: Not Supported 00:23:52.323 Traffic Based Keep ALive: Not Supported 00:23:52.323 Namespace Granularity: Not Supported 00:23:52.323 SQ Associations: Not Supported 00:23:52.323 UUID List: Not Supported 00:23:52.323 Multi-Domain Subsystem: Not Supported 00:23:52.323 Fixed Capacity Management: Not Supported 00:23:52.323 Variable Capacity Management: Not Supported 00:23:52.323 Delete Endurance Group: Not Supported 00:23:52.323 Delete NVM Set: Not Supported 00:23:52.323 Extended LBA Formats Supported: Supported 00:23:52.323 Flexible Data Placement Supported: Not Supported 00:23:52.323 00:23:52.323 Controller Memory Buffer Support 00:23:52.323 ================================ 00:23:52.323 Supported: No 
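The get_nvme_bdfs trace a few lines above shows how the harness enumerates controllers: gen_nvme.sh emits a JSON config and jq pulls each PCI address out of .config[].params.traddr. A minimal standalone sketch of that pattern, assuming the same JSON shape seen in the trace and a $rootdir pointing at the SPDK repo (the function body is illustrative, not the exact common.sh source):

# Enumerate NVMe PCI addresses (BDFs) from gen_nvme.sh JSON output.
get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1   # bail out if no controllers were found
    printf '%s\n' "${bdfs[@]}"
}

On this VM it would print the four addresses seen in the trace: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0.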
00:23:52.323 00:23:52.323 Persistent Memory Region Support 00:23:52.323 ================================ 00:23:52.323 Supported: No 00:23:52.323 00:23:52.323 Admin Command Set Attributes 00:23:52.323 ============================ 00:23:52.323 Security Send/Receive: Not Supported 00:23:52.323 Format NVM: Supported 00:23:52.323 Firmware Activate/Download: Not Supported 00:23:52.323 Namespace Management: Supported 00:23:52.323 Device Self-Test: Not Supported 00:23:52.323 Directives: Supported 00:23:52.323 NVMe-MI: Not Supported 00:23:52.323 Virtualization Management: Not Supported 00:23:52.323 Doorbell Buffer Config: Supported 00:23:52.323 Get LBA Status Capability: Not Supported 00:23:52.323 Command & Feature Lockdown Capability: Not Supported 00:23:52.323 Abort Command Limit: 4 00:23:52.323 Async Event Request Limit: 4 00:23:52.323 Number of Firmware Slots: N/A 00:23:52.323 Firmware Slot 1 Read-Only: N/A 00:23:52.323 Firmware Activation Without Reset: N/A 00:23:52.323 Multiple Update Detection Support: N/A 00:23:52.323 Firmware Update Granularity: No Information Provided 00:23:52.323 Per-Namespace SMART Log: Yes 00:23:52.323 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.323 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:23:52.323 Command Effects Log Page: Supported 00:23:52.323 Get Log Page Extended Data: Supported 00:23:52.323 Telemetry Log Pages: Not Supported 00:23:52.323 Persistent Event Log Pages: Not Supported 00:23:52.323 Supported Log Pages Log Page: May Support 00:23:52.323 Commands Supported & Effects Log Page: Not Supported 00:23:52.323 Feature Identifiers & Effects Log Page:May Support 00:23:52.323 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.323 Data Area 4 for Telemetry Log: Not Supported 00:23:52.323 Error Log Page Entries Supported: 1 00:23:52.323 Keep Alive: Not Supported 00:23:52.323 00:23:52.323 NVM Command Set Attributes 00:23:52.323 ========================== 00:23:52.323 Submission Queue Entry Size 00:23:52.323 Max: 64 00:23:52.323 Min: 64 00:23:52.323 Completion Queue Entry Size 00:23:52.323 Max: 16 00:23:52.323 Min: 16 00:23:52.323 Number of Namespaces: 256 00:23:52.323 Compare Command: Supported 00:23:52.323 Write Uncorrectable Command: Not Supported 00:23:52.323 Dataset Management Command: Supported 00:23:52.323 Write Zeroes Command: Supported 00:23:52.323 Set Features Save Field: Supported 00:23:52.323 Reservations: Not Supported 00:23:52.323 Timestamp: Supported 00:23:52.323 Copy: Supported 00:23:52.323 Volatile Write Cache: Present 00:23:52.323 Atomic Write Unit (Normal): 1 00:23:52.323 Atomic Write Unit (PFail): 1 00:23:52.323 Atomic Compare & Write Unit: 1 00:23:52.323 Fused Compare & Write: Not Supported 00:23:52.323 Scatter-Gather List 00:23:52.323 SGL Command Set: Supported 00:23:52.323 SGL Keyed: Not Supported 00:23:52.323 SGL Bit Bucket Descriptor: Not Supported 00:23:52.323 SGL Metadata Pointer: Not Supported 00:23:52.323 Oversized SGL: Not Supported 00:23:52.323 SGL Metadata Address: Not Supported 00:23:52.323 SGL Offset: Not Supported 00:23:52.323 Transport SGL Data Block: Not Supported 00:23:52.323 Replay Protected Memory Block: Not Supported 00:23:52.323 00:23:52.323 Firmware Slot Information 00:23:52.323 ========================= 00:23:52.323 Active slot: 1 00:23:52.323 Slot 1 Firmware Revision: 1.0 00:23:52.323 00:23:52.323 00:23:52.323 Commands Supported and Effects 00:23:52.323 ============================== 00:23:52.323 Admin Commands 00:23:52.323 -------------- 00:23:52.323 Delete I/O Submission Queue (00h): Supported 
00:23:52.323 Create I/O Submission Queue (01h): Supported 00:23:52.323 Get Log Page (02h): Supported 00:23:52.323 Delete I/O Completion Queue (04h): Supported 00:23:52.323 Create I/O Completion Queue (05h): Supported 00:23:52.323 Identify (06h): Supported 00:23:52.323 Abort (08h): Supported 00:23:52.323 Set Features (09h): Supported 00:23:52.323 Get Features (0Ah): Supported 00:23:52.323 Asynchronous Event Request (0Ch): Supported 00:23:52.323 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:52.323 Directive Send (19h): Supported 00:23:52.323 Directive Receive (1Ah): Supported 00:23:52.323 Virtualization Management (1Ch): Supported 00:23:52.323 Doorbell Buffer Config (7Ch): Supported 00:23:52.323 Format NVM (80h): Supported LBA-Change 00:23:52.323 I/O Commands 00:23:52.323 ------------ 00:23:52.323 Flush (00h): Supported LBA-Change 00:23:52.323 Write (01h): Supported LBA-Change 00:23:52.323 Read (02h): Supported 00:23:52.323 Compare (05h): Supported 00:23:52.323 Write Zeroes (08h): Supported LBA-Change 00:23:52.323 Dataset Management (09h): Supported LBA-Change 00:23:52.323 Unknown (0Ch): Supported 00:23:52.324 Unknown (12h): Supported 00:23:52.324 Copy (19h): Supported LBA-Change 00:23:52.324 Unknown (1Dh): Supported LBA-Change 00:23:52.324 00:23:52.324 Error Log 00:23:52.324 ========= 00:23:52.324 00:23:52.324 Arbitration 00:23:52.324 =========== 00:23:52.324 Arbitration Burst: no limit 00:23:52.324 00:23:52.324 Power Management 00:23:52.324 ================ 00:23:52.324 Number of Power States: 1 00:23:52.324 Current Power State: Power State #0 00:23:52.324 Power State #0: 00:23:52.324 Max Power: 25.00 W 00:23:52.324 Non-Operational State: Operational 00:23:52.324 Entry Latency: 16 microseconds 00:23:52.324 Exit Latency: 4 microseconds 00:23:52.324 Relative Read Throughput: 0 00:23:52.324 Relative Read Latency: 0 00:23:52.324 Relative Write Throughput: 0 00:23:52.324 Relative Write Latency: 0 00:23:52.324 Idle Power[2024-11-20 07:21:16.479435] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64876 terminated unexpected 00:23:52.324 : Not Reported 00:23:52.324 Active Power: Not Reported 00:23:52.324 Non-Operational Permissive Mode: Not Supported 00:23:52.324 00:23:52.324 Health Information 00:23:52.324 ================== 00:23:52.324 Critical Warnings: 00:23:52.324 Available Spare Space: OK 00:23:52.324 Temperature: OK 00:23:52.324 Device Reliability: OK 00:23:52.324 Read Only: No 00:23:52.324 Volatile Memory Backup: OK 00:23:52.324 Current Temperature: 323 Kelvin (50 Celsius) 00:23:52.324 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:52.324 Available Spare: 0% 00:23:52.324 Available Spare Threshold: 0% 00:23:52.324 Life Percentage Used: 0% 00:23:52.324 Data Units Read: 656 00:23:52.324 Data Units Written: 584 00:23:52.324 Host Read Commands: 31123 00:23:52.324 Host Write Commands: 30909 00:23:52.324 Controller Busy Time: 0 minutes 00:23:52.324 Power Cycles: 0 00:23:52.324 Power On Hours: 0 hours 00:23:52.324 Unsafe Shutdowns: 0 00:23:52.324 Unrecoverable Media Errors: 0 00:23:52.324 Lifetime Error Log Entries: 0 00:23:52.324 Warning Temperature Time: 0 minutes 00:23:52.324 Critical Temperature Time: 0 minutes 00:23:52.324 00:23:52.324 Number of Queues 00:23:52.324 ================ 00:23:52.324 Number of I/O Submission Queues: 64 00:23:52.324 Number of I/O Completion Queues: 64 00:23:52.324 00:23:52.324 ZNS Specific Controller Data 00:23:52.324 ============================ 00:23:52.324 Zone Append Size Limit: 0 00:23:52.324 
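The Active Namespaces sections that follow report Size, Capacity and Utilization in LBAs, so the byte size is simply the LBA count times the data size of the current LBA format (4096 bytes for the #04/#07 formats these QEMU controllers use). A small hedged helper, not part of the test suite, to sanity-check the figures:

# Convert an LBA count to bytes for a given LBA data size (default 4096).
lbas_to_bytes() {
    local lbas=$1 lba_size=${2:-4096}
    echo $((lbas * lba_size))
}
lbas_to_bytes 1310720   # 5368709120 bytes = exactly 5GiB (the 12341 namespace below)
lbas_to_bytes 262144    # 1073741824 bytes = 1GiB (the FDP namespace on 12343)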
00:23:52.324 00:23:52.324 Active Namespaces 00:23:52.324 ================= 00:23:52.324 Namespace ID:1 00:23:52.324 Error Recovery Timeout: Unlimited 00:23:52.324 Command Set Identifier: NVM (00h) 00:23:52.324 Deallocate: Supported 00:23:52.324 Deallocated/Unwritten Error: Supported 00:23:52.324 Deallocated Read Value: All 0x00 00:23:52.324 Deallocate in Write Zeroes: Not Supported 00:23:52.324 Deallocated Guard Field: 0xFFFF 00:23:52.324 Flush: Supported 00:23:52.324 Reservation: Not Supported 00:23:52.324 Metadata Transferred as: Separate Metadata Buffer 00:23:52.324 Namespace Sharing Capabilities: Private 00:23:52.324 Size (in LBAs): 1548666 (5GiB) 00:23:52.324 Capacity (in LBAs): 1548666 (5GiB) 00:23:52.324 Utilization (in LBAs): 1548666 (5GiB) 00:23:52.324 Thin Provisioning: Not Supported 00:23:52.324 Per-NS Atomic Units: No 00:23:52.324 Maximum Single Source Range Length: 128 00:23:52.324 Maximum Copy Length: 128 00:23:52.324 Maximum Source Range Count: 128 00:23:52.324 NGUID/EUI64 Never Reused: No 00:23:52.324 Namespace Write Protected: No 00:23:52.324 Number of LBA Formats: 8 00:23:52.324 Current LBA Format: LBA Format #07 00:23:52.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.324 00:23:52.324 NVM Specific Namespace Data 00:23:52.324 =========================== 00:23:52.324 Logical Block Storage Tag Mask: 0 00:23:52.324 Protection Information Capabilities: 00:23:52.324 16b Guard Protection Information Storage Tag Support: No 00:23:52.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.324 Storage Tag Check Read Support: No 00:23:52.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.324 ===================================================== 00:23:52.324 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:52.324 ===================================================== 00:23:52.324 Controller Capabilities/Features 00:23:52.324 ================================ 00:23:52.324 Vendor ID: 1b36 00:23:52.324 Subsystem Vendor ID: 1af4 00:23:52.324 Serial Number: 12341 00:23:52.324 Model Number: QEMU NVMe Ctrl 00:23:52.324 Firmware Version: 8.0.0 00:23:52.324 Recommended Arb Burst: 6 00:23:52.324 IEEE OUI Identifier: 00 54 52 00:23:52.324 Multi-path I/O 00:23:52.324 May have multiple subsystem ports: No 00:23:52.324 May have multiple controllers: No 
00:23:52.324 Associated with SR-IOV VF: No 00:23:52.324 Max Data Transfer Size: 524288 00:23:52.324 Max Number of Namespaces: 256 00:23:52.324 Max Number of I/O Queues: 64 00:23:52.324 NVMe Specification Version (VS): 1.4 00:23:52.324 NVMe Specification Version (Identify): 1.4 00:23:52.324 Maximum Queue Entries: 2048 00:23:52.324 Contiguous Queues Required: Yes 00:23:52.324 Arbitration Mechanisms Supported 00:23:52.324 Weighted Round Robin: Not Supported 00:23:52.324 Vendor Specific: Not Supported 00:23:52.324 Reset Timeout: 7500 ms 00:23:52.324 Doorbell Stride: 4 bytes 00:23:52.324 NVM Subsystem Reset: Not Supported 00:23:52.324 Command Sets Supported 00:23:52.324 NVM Command Set: Supported 00:23:52.324 Boot Partition: Not Supported 00:23:52.324 Memory Page Size Minimum: 4096 bytes 00:23:52.324 Memory Page Size Maximum: 65536 bytes 00:23:52.324 Persistent Memory Region: Not Supported 00:23:52.324 Optional Asynchronous Events Supported 00:23:52.324 Namespace Attribute Notices: Supported 00:23:52.324 Firmware Activation Notices: Not Supported 00:23:52.324 ANA Change Notices: Not Supported 00:23:52.324 PLE Aggregate Log Change Notices: Not Supported 00:23:52.324 LBA Status Info Alert Notices: Not Supported 00:23:52.324 EGE Aggregate Log Change Notices: Not Supported 00:23:52.324 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.324 Zone Descriptor Change Notices: Not Supported 00:23:52.324 Discovery Log Change Notices: Not Supported 00:23:52.324 Controller Attributes 00:23:52.324 128-bit Host Identifier: Not Supported 00:23:52.324 Non-Operational Permissive Mode: Not Supported 00:23:52.324 NVM Sets: Not Supported 00:23:52.324 Read Recovery Levels: Not Supported 00:23:52.324 Endurance Groups: Not Supported 00:23:52.324 Predictable Latency Mode: Not Supported 00:23:52.324 Traffic Based Keep ALive: Not Supported 00:23:52.324 Namespace Granularity: Not Supported 00:23:52.324 SQ Associations: Not Supported 00:23:52.324 UUID List: Not Supported 00:23:52.324 Multi-Domain Subsystem: Not Supported 00:23:52.324 Fixed Capacity Management: Not Supported 00:23:52.324 Variable Capacity Management: Not Supported 00:23:52.324 Delete Endurance Group: Not Supported 00:23:52.324 Delete NVM Set: Not Supported 00:23:52.324 Extended LBA Formats Supported: Supported 00:23:52.324 Flexible Data Placement Supported: Not Supported 00:23:52.324 00:23:52.324 Controller Memory Buffer Support 00:23:52.324 ================================ 00:23:52.324 Supported: No 00:23:52.324 00:23:52.324 Persistent Memory Region Support 00:23:52.324 ================================ 00:23:52.324 Supported: No 00:23:52.324 00:23:52.324 Admin Command Set Attributes 00:23:52.324 ============================ 00:23:52.324 Security Send/Receive: Not Supported 00:23:52.324 Format NVM: Supported 00:23:52.324 Firmware Activate/Download: Not Supported 00:23:52.324 Namespace Management: Supported 00:23:52.324 Device Self-Test: Not Supported 00:23:52.324 Directives: Supported 00:23:52.324 NVMe-MI: Not Supported 00:23:52.325 Virtualization Management: Not Supported 00:23:52.325 Doorbell Buffer Config: Supported 00:23:52.325 Get LBA Status Capability: Not Supported 00:23:52.325 Command & Feature Lockdown Capability: Not Supported 00:23:52.325 Abort Command Limit: 4 00:23:52.325 Async Event Request Limit: 4 00:23:52.325 Number of Firmware Slots: N/A 00:23:52.325 Firmware Slot 1 Read-Only: N/A 00:23:52.325 Firmware Activation Without Reset: N/A 00:23:52.325 Multiple Update Detection Support: N/A 00:23:52.325 Firmware Update Granularity: No 
Information Provided 00:23:52.325 Per-Namespace SMART Log: Yes 00:23:52.325 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.325 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:23:52.325 Command Effects Log Page: Supported 00:23:52.325 Get Log Page Extended Data: Supported 00:23:52.325 Telemetry Log Pages: Not Supported 00:23:52.325 Persistent Event Log Pages: Not Supported 00:23:52.325 Supported Log Pages Log Page: May Support 00:23:52.325 Commands Supported & Effects Log Page: Not Supported 00:23:52.325 Feature Identifiers & Effects Log Page:May Support 00:23:52.325 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.325 Data Area 4 for Telemetry Log: Not Supported 00:23:52.325 Error Log Page Entries Supported: 1 00:23:52.325 Keep Alive: Not Supported 00:23:52.325 00:23:52.325 NVM Command Set Attributes 00:23:52.325 ========================== 00:23:52.325 Submission Queue Entry Size 00:23:52.325 Max: 64 00:23:52.325 Min: 64 00:23:52.325 Completion Queue Entry Size 00:23:52.325 Max: 16 00:23:52.325 Min: 16 00:23:52.325 Number of Namespaces: 256 00:23:52.325 Compare Command: Supported 00:23:52.325 Write Uncorrectable Command: Not Supported 00:23:52.325 Dataset Management Command: Supported 00:23:52.325 Write Zeroes Command: Supported 00:23:52.325 Set Features Save Field: Supported 00:23:52.325 Reservations: Not Supported 00:23:52.325 Timestamp: Supported 00:23:52.325 Copy: Supported 00:23:52.325 Volatile Write Cache: Present 00:23:52.325 Atomic Write Unit (Normal): 1 00:23:52.325 Atomic Write Unit (PFail): 1 00:23:52.325 Atomic Compare & Write Unit: 1 00:23:52.325 Fused Compare & Write: Not Supported 00:23:52.325 Scatter-Gather List 00:23:52.325 SGL Command Set: Supported 00:23:52.325 SGL Keyed: Not Supported 00:23:52.325 SGL Bit Bucket Descriptor: Not Supported 00:23:52.325 SGL Metadata Pointer: Not Supported 00:23:52.325 Oversized SGL: Not Supported 00:23:52.325 SGL Metadata Address: Not Supported 00:23:52.325 SGL Offset: Not Supported 00:23:52.325 Transport SGL Data Block: Not Supported 00:23:52.325 Replay Protected Memory Block: Not Supported 00:23:52.325 00:23:52.325 Firmware Slot Information 00:23:52.325 ========================= 00:23:52.325 Active slot: 1 00:23:52.325 Slot 1 Firmware Revision: 1.0 00:23:52.325 00:23:52.325 00:23:52.325 Commands Supported and Effects 00:23:52.325 ============================== 00:23:52.325 Admin Commands 00:23:52.325 -------------- 00:23:52.325 Delete I/O Submission Queue (00h): Supported 00:23:52.325 Create I/O Submission Queue (01h): Supported 00:23:52.325 Get Log Page (02h): Supported 00:23:52.325 Delete I/O Completion Queue (04h): Supported 00:23:52.325 Create I/O Completion Queue (05h): Supported 00:23:52.325 Identify (06h): Supported 00:23:52.325 Abort (08h): Supported 00:23:52.325 Set Features (09h): Supported 00:23:52.325 Get Features (0Ah): Supported 00:23:52.325 Asynchronous Event Request (0Ch): Supported 00:23:52.325 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:52.325 Directive Send (19h): Supported 00:23:52.325 Directive Receive (1Ah): Supported 00:23:52.325 Virtualization Management (1Ch): Supported 00:23:52.325 Doorbell Buffer Config (7Ch): Supported 00:23:52.325 Format NVM (80h): Supported LBA-Change 00:23:52.325 I/O Commands 00:23:52.325 ------------ 00:23:52.325 Flush (00h): Supported LBA-Change 00:23:52.325 Write (01h): Supported LBA-Change 00:23:52.325 Read (02h): Supported 00:23:52.325 Compare (05h): Supported 00:23:52.325 Write Zeroes (08h): Supported LBA-Change 00:23:52.325 Dataset Management 
(09h): Supported LBA-Change 00:23:52.325 Unknown (0Ch): Supported 00:23:52.325 Unknown (12h): Supported 00:23:52.325 Copy (19h): Supported LBA-Change 00:23:52.325 Unknown (1Dh): Supported LBA-Change 00:23:52.325 00:23:52.325 Error Log 00:23:52.325 ========= 00:23:52.325 00:23:52.325 Arbitration 00:23:52.325 =========== 00:23:52.325 Arbitration Burst: no limit 00:23:52.325 00:23:52.325 Power Management 00:23:52.325 ================ 00:23:52.325 Number of Power States: 1 00:23:52.325 Current Power State: Power State #0 00:23:52.325 Power State #0: 00:23:52.325 Max Power: 25.00 W 00:23:52.325 Non-Operational State: Operational 00:23:52.325 Entry Latency: 16 microseconds 00:23:52.325 Exit Latency: 4 microseconds 00:23:52.325 Relative Read Throughput: 0 00:23:52.325 Relative Read Latency: 0 00:23:52.325 Relative Write Throughput: 0 00:23:52.325 Relative Write Latency: 0 00:23:52.325 Idle Power: Not Reported 00:23:52.325 Active Power: Not Reported 00:23:52.325 Non-Operational Permissive Mode: Not Supported 00:23:52.325 00:23:52.325 Health Information 00:23:52.325 ================== 00:23:52.325 Critical Warnings: 00:23:52.325 Available Spare Space: OK 00:23:52.325 Temperature: [2024-11-20 07:21:16.480274] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64876 terminated unexpected 00:23:52.325 OK 00:23:52.325 Device Reliability: OK 00:23:52.325 Read Only: No 00:23:52.325 Volatile Memory Backup: OK 00:23:52.325 Current Temperature: 323 Kelvin (50 Celsius) 00:23:52.325 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:52.325 Available Spare: 0% 00:23:52.325 Available Spare Threshold: 0% 00:23:52.325 Life Percentage Used: 0% 00:23:52.325 Data Units Read: 976 00:23:52.325 Data Units Written: 843 00:23:52.325 Host Read Commands: 46456 00:23:52.325 Host Write Commands: 45235 00:23:52.325 Controller Busy Time: 0 minutes 00:23:52.325 Power Cycles: 0 00:23:52.325 Power On Hours: 0 hours 00:23:52.325 Unsafe Shutdowns: 0 00:23:52.325 Unrecoverable Media Errors: 0 00:23:52.325 Lifetime Error Log Entries: 0 00:23:52.325 Warning Temperature Time: 0 minutes 00:23:52.325 Critical Temperature Time: 0 minutes 00:23:52.325 00:23:52.325 Number of Queues 00:23:52.325 ================ 00:23:52.325 Number of I/O Submission Queues: 64 00:23:52.325 Number of I/O Completion Queues: 64 00:23:52.325 00:23:52.325 ZNS Specific Controller Data 00:23:52.325 ============================ 00:23:52.325 Zone Append Size Limit: 0 00:23:52.325 00:23:52.325 00:23:52.325 Active Namespaces 00:23:52.325 ================= 00:23:52.325 Namespace ID:1 00:23:52.325 Error Recovery Timeout: Unlimited 00:23:52.325 Command Set Identifier: NVM (00h) 00:23:52.325 Deallocate: Supported 00:23:52.325 Deallocated/Unwritten Error: Supported 00:23:52.325 Deallocated Read Value: All 0x00 00:23:52.325 Deallocate in Write Zeroes: Not Supported 00:23:52.325 Deallocated Guard Field: 0xFFFF 00:23:52.325 Flush: Supported 00:23:52.325 Reservation: Not Supported 00:23:52.325 Namespace Sharing Capabilities: Private 00:23:52.325 Size (in LBAs): 1310720 (5GiB) 00:23:52.325 Capacity (in LBAs): 1310720 (5GiB) 00:23:52.325 Utilization (in LBAs): 1310720 (5GiB) 00:23:52.325 Thin Provisioning: Not Supported 00:23:52.325 Per-NS Atomic Units: No 00:23:52.325 Maximum Single Source Range Length: 128 00:23:52.325 Maximum Copy Length: 128 00:23:52.325 Maximum Source Range Count: 128 00:23:52.325 NGUID/EUI64 Never Reused: No 00:23:52.325 Namespace Write Protected: No 00:23:52.325 Number of LBA Formats: 8 00:23:52.325 Current LBA Format: 
LBA Format #04 00:23:52.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.325 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.325 00:23:52.325 NVM Specific Namespace Data 00:23:52.325 =========================== 00:23:52.325 Logical Block Storage Tag Mask: 0 00:23:52.325 Protection Information Capabilities: 00:23:52.325 16b Guard Protection Information Storage Tag Support: No 00:23:52.325 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.325 Storage Tag Check Read Support: No 00:23:52.325 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.325 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.325 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.326 ===================================================== 00:23:52.326 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:52.326 ===================================================== 00:23:52.326 Controller Capabilities/Features 00:23:52.326 ================================ 00:23:52.326 Vendor ID: 1b36 00:23:52.326 Subsystem Vendor ID: 1af4 00:23:52.326 Serial Number: 12343 00:23:52.326 Model Number: QEMU NVMe Ctrl 00:23:52.326 Firmware Version: 8.0.0 00:23:52.326 Recommended Arb Burst: 6 00:23:52.326 IEEE OUI Identifier: 00 54 52 00:23:52.326 Multi-path I/O 00:23:52.326 May have multiple subsystem ports: No 00:23:52.326 May have multiple controllers: Yes 00:23:52.326 Associated with SR-IOV VF: No 00:23:52.326 Max Data Transfer Size: 524288 00:23:52.326 Max Number of Namespaces: 256 00:23:52.326 Max Number of I/O Queues: 64 00:23:52.326 NVMe Specification Version (VS): 1.4 00:23:52.326 NVMe Specification Version (Identify): 1.4 00:23:52.326 Maximum Queue Entries: 2048 00:23:52.326 Contiguous Queues Required: Yes 00:23:52.326 Arbitration Mechanisms Supported 00:23:52.326 Weighted Round Robin: Not Supported 00:23:52.326 Vendor Specific: Not Supported 00:23:52.326 Reset Timeout: 7500 ms 00:23:52.326 Doorbell Stride: 4 bytes 00:23:52.326 NVM Subsystem Reset: Not Supported 00:23:52.326 Command Sets Supported 00:23:52.326 NVM Command Set: Supported 00:23:52.326 Boot Partition: Not Supported 00:23:52.326 Memory Page Size Minimum: 4096 bytes 00:23:52.326 Memory Page Size Maximum: 65536 bytes 00:23:52.326 Persistent Memory Region: Not Supported 00:23:52.326 Optional Asynchronous Events Supported 00:23:52.326 Namespace Attribute Notices: Supported 00:23:52.326 Firmware Activation Notices: Not Supported 00:23:52.326 ANA Change Notices: Not Supported 00:23:52.326 PLE Aggregate Log 
Change Notices: Not Supported 00:23:52.326 LBA Status Info Alert Notices: Not Supported 00:23:52.326 EGE Aggregate Log Change Notices: Not Supported 00:23:52.326 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.326 Zone Descriptor Change Notices: Not Supported 00:23:52.326 Discovery Log Change Notices: Not Supported 00:23:52.326 Controller Attributes 00:23:52.326 128-bit Host Identifier: Not Supported 00:23:52.326 Non-Operational Permissive Mode: Not Supported 00:23:52.326 NVM Sets: Not Supported 00:23:52.326 Read Recovery Levels: Not Supported 00:23:52.326 Endurance Groups: Supported 00:23:52.326 Predictable Latency Mode: Not Supported 00:23:52.326 Traffic Based Keep ALive: Not Supported 00:23:52.326 Namespace Granularity: Not Supported 00:23:52.326 SQ Associations: Not Supported 00:23:52.326 UUID List: Not Supported 00:23:52.326 Multi-Domain Subsystem: Not Supported 00:23:52.326 Fixed Capacity Management: Not Supported 00:23:52.326 Variable Capacity Management: Not Supported 00:23:52.326 Delete Endurance Group: Not Supported 00:23:52.326 Delete NVM Set: Not Supported 00:23:52.326 Extended LBA Formats Supported: Supported 00:23:52.326 Flexible Data Placement Supported: Supported 00:23:52.326 00:23:52.326 Controller Memory Buffer Support 00:23:52.326 ================================ 00:23:52.326 Supported: No 00:23:52.326 00:23:52.326 Persistent Memory Region Support 00:23:52.326 ================================ 00:23:52.326 Supported: No 00:23:52.326 00:23:52.326 Admin Command Set Attributes 00:23:52.326 ============================ 00:23:52.326 Security Send/Receive: Not Supported 00:23:52.326 Format NVM: Supported 00:23:52.326 Firmware Activate/Download: Not Supported 00:23:52.326 Namespace Management: Supported 00:23:52.326 Device Self-Test: Not Supported 00:23:52.326 Directives: Supported 00:23:52.326 NVMe-MI: Not Supported 00:23:52.326 Virtualization Management: Not Supported 00:23:52.326 Doorbell Buffer Config: Supported 00:23:52.326 Get LBA Status Capability: Not Supported 00:23:52.326 Command & Feature Lockdown Capability: Not Supported 00:23:52.326 Abort Command Limit: 4 00:23:52.326 Async Event Request Limit: 4 00:23:52.326 Number of Firmware Slots: N/A 00:23:52.326 Firmware Slot 1 Read-Only: N/A 00:23:52.326 Firmware Activation Without Reset: N/A 00:23:52.326 Multiple Update Detection Support: N/A 00:23:52.326 Firmware Update Granularity: No Information Provided 00:23:52.326 Per-Namespace SMART Log: Yes 00:23:52.326 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.326 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:23:52.326 Command Effects Log Page: Supported 00:23:52.326 Get Log Page Extended Data: Supported 00:23:52.326 Telemetry Log Pages: Not Supported 00:23:52.326 Persistent Event Log Pages: Not Supported 00:23:52.326 Supported Log Pages Log Page: May Support 00:23:52.326 Commands Supported & Effects Log Page: Not Supported 00:23:52.326 Feature Identifiers & Effects Log Page:May Support 00:23:52.326 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.326 Data Area 4 for Telemetry Log: Not Supported 00:23:52.326 Error Log Page Entries Supported: 1 00:23:52.326 Keep Alive: Not Supported 00:23:52.326 00:23:52.326 NVM Command Set Attributes 00:23:52.326 ========================== 00:23:52.326 Submission Queue Entry Size 00:23:52.326 Max: 64 00:23:52.326 Min: 64 00:23:52.326 Completion Queue Entry Size 00:23:52.326 Max: 16 00:23:52.326 Min: 16 00:23:52.326 Number of Namespaces: 256 00:23:52.326 Compare Command: Supported 00:23:52.326 Write 
Uncorrectable Command: Not Supported 00:23:52.326 Dataset Management Command: Supported 00:23:52.326 Write Zeroes Command: Supported 00:23:52.326 Set Features Save Field: Supported 00:23:52.326 Reservations: Not Supported 00:23:52.326 Timestamp: Supported 00:23:52.326 Copy: Supported 00:23:52.326 Volatile Write Cache: Present 00:23:52.326 Atomic Write Unit (Normal): 1 00:23:52.326 Atomic Write Unit (PFail): 1 00:23:52.326 Atomic Compare & Write Unit: 1 00:23:52.326 Fused Compare & Write: Not Supported 00:23:52.326 Scatter-Gather List 00:23:52.326 SGL Command Set: Supported 00:23:52.326 SGL Keyed: Not Supported 00:23:52.326 SGL Bit Bucket Descriptor: Not Supported 00:23:52.326 SGL Metadata Pointer: Not Supported 00:23:52.326 Oversized SGL: Not Supported 00:23:52.326 SGL Metadata Address: Not Supported 00:23:52.326 SGL Offset: Not Supported 00:23:52.326 Transport SGL Data Block: Not Supported 00:23:52.326 Replay Protected Memory Block: Not Supported 00:23:52.326 00:23:52.326 Firmware Slot Information 00:23:52.326 ========================= 00:23:52.326 Active slot: 1 00:23:52.326 Slot 1 Firmware Revision: 1.0 00:23:52.326 00:23:52.326 00:23:52.326 Commands Supported and Effects 00:23:52.326 ============================== 00:23:52.326 Admin Commands 00:23:52.326 -------------- 00:23:52.326 Delete I/O Submission Queue (00h): Supported 00:23:52.326 Create I/O Submission Queue (01h): Supported 00:23:52.326 Get Log Page (02h): Supported 00:23:52.326 Delete I/O Completion Queue (04h): Supported 00:23:52.326 Create I/O Completion Queue (05h): Supported 00:23:52.326 Identify (06h): Supported 00:23:52.326 Abort (08h): Supported 00:23:52.326 Set Features (09h): Supported 00:23:52.326 Get Features (0Ah): Supported 00:23:52.326 Asynchronous Event Request (0Ch): Supported 00:23:52.326 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:52.326 Directive Send (19h): Supported 00:23:52.326 Directive Receive (1Ah): Supported 00:23:52.326 Virtualization Management (1Ch): Supported 00:23:52.326 Doorbell Buffer Config (7Ch): Supported 00:23:52.326 Format NVM (80h): Supported LBA-Change 00:23:52.326 I/O Commands 00:23:52.326 ------------ 00:23:52.326 Flush (00h): Supported LBA-Change 00:23:52.326 Write (01h): Supported LBA-Change 00:23:52.326 Read (02h): Supported 00:23:52.326 Compare (05h): Supported 00:23:52.326 Write Zeroes (08h): Supported LBA-Change 00:23:52.326 Dataset Management (09h): Supported LBA-Change 00:23:52.326 Unknown (0Ch): Supported 00:23:52.326 Unknown (12h): Supported 00:23:52.326 Copy (19h): Supported LBA-Change 00:23:52.326 Unknown (1Dh): Supported LBA-Change 00:23:52.326 00:23:52.326 Error Log 00:23:52.326 ========= 00:23:52.326 00:23:52.326 Arbitration 00:23:52.326 =========== 00:23:52.326 Arbitration Burst: no limit 00:23:52.326 00:23:52.326 Power Management 00:23:52.326 ================ 00:23:52.326 Number of Power States: 1 00:23:52.326 Current Power State: Power State #0 00:23:52.326 Power State #0: 00:23:52.326 Max Power: 25.00 W 00:23:52.327 Non-Operational State: Operational 00:23:52.327 Entry Latency: 16 microseconds 00:23:52.327 Exit Latency: 4 microseconds 00:23:52.327 Relative Read Throughput: 0 00:23:52.327 Relative Read Latency: 0 00:23:52.327 Relative Write Throughput: 0 00:23:52.327 Relative Write Latency: 0 00:23:52.327 Idle Power: Not Reported 00:23:52.327 Active Power: Not Reported 00:23:52.327 Non-Operational Permissive Mode: Not Supported 00:23:52.327 00:23:52.327 Health Information 00:23:52.327 ================== 00:23:52.327 Critical Warnings: 00:23:52.327 
Available Spare Space: OK 00:23:52.327 Temperature: OK 00:23:52.327 Device Reliability: OK 00:23:52.327 Read Only: No 00:23:52.327 Volatile Memory Backup: OK 00:23:52.327 Current Temperature: 323 Kelvin (50 Celsius) 00:23:52.327 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:52.327 Available Spare: 0% 00:23:52.327 Available Spare Threshold: 0% 00:23:52.327 Life Percentage Used: 0% 00:23:52.327 Data Units Read: 747 00:23:52.327 Data Units Written: 676 00:23:52.327 Host Read Commands: 32193 00:23:52.327 Host Write Commands: 31616 00:23:52.327 Controller Busy Time: 0 minutes 00:23:52.327 Power Cycles: 0 00:23:52.327 Power On Hours: 0 hours 00:23:52.327 Unsafe Shutdowns: 0 00:23:52.327 Unrecoverable Media Errors: 0 00:23:52.327 Lifetime Error Log Entries: 0 00:23:52.327 Warning Temperature Time: 0 minutes 00:23:52.327 Critical Temperature Time: 0 minutes 00:23:52.327 00:23:52.327 Number of Queues 00:23:52.327 ================ 00:23:52.327 Number of I/O Submission Queues: 64 00:23:52.327 Number of I/O Completion Queues: 64 00:23:52.327 00:23:52.327 ZNS Specific Controller Data 00:23:52.327 ============================ 00:23:52.327 Zone Append Size Limit: 0 00:23:52.327 00:23:52.327 00:23:52.327 Active Namespaces 00:23:52.327 ================= 00:23:52.327 Namespace ID:1 00:23:52.327 Error Recovery Timeout: Unlimited 00:23:52.327 Command Set Identifier: NVM (00h) 00:23:52.327 Deallocate: Supported 00:23:52.327 Deallocated/Unwritten Error: Supported 00:23:52.327 Deallocated Read Value: All 0x00 00:23:52.327 Deallocate in Write Zeroes: Not Supported 00:23:52.327 Deallocated Guard Field: 0xFFFF 00:23:52.327 Flush: Supported 00:23:52.327 Reservation: Not Supported 00:23:52.327 Namespace Sharing Capabilities: Multiple Controllers 00:23:52.327 Size (in LBAs): 262144 (1GiB) 00:23:52.327 Capacity (in LBAs): 262144 (1GiB) 00:23:52.327 Utilization (in LBAs): 262144 (1GiB) 00:23:52.327 Thin Provisioning: Not Supported 00:23:52.327 Per-NS Atomic Units: No 00:23:52.327 Maximum Single Source Range Length: 128 00:23:52.327 Maximum Copy Length: 128 00:23:52.327 Maximum Source Range Count: 128 00:23:52.327 NGUID/EUI64 Never Reused: No 00:23:52.327 Namespace Write Protected: No 00:23:52.327 Endurance group ID: 1 00:23:52.327 Number of LBA Formats: 8 00:23:52.327 Current LBA Format: LBA Format #04 00:23:52.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.327 00:23:52.327 Get Feature FDP: 00:23:52.327 ================ 00:23:52.327 Enabled: Yes 00:23:52.327 FDP configuration index: 0 00:23:52.327 00:23:52.327 FDP configurations log page 00:23:52.327 =========================== 00:23:52.327 Number of FDP configurations: 1 00:23:52.327 Version: 0 00:23:52.327 Size: 112 00:23:52.327 FDP Configuration Descriptor: 0 00:23:52.327 Descriptor Size: 96 00:23:52.327 Reclaim Group Identifier format: 2 00:23:52.327 FDP Volatile Write Cache: Not Present 00:23:52.327 FDP Configuration: Valid 00:23:52.327 Vendor Specific Size: 0 00:23:52.327 Number of Reclaim Groups: 2 00:23:52.327 Number of Reclaim Unit Handles: 8 00:23:52.327 Max Placement Identifiers: 128 00:23:52.327 Number of
Namespaces Supported: 256 00:23:52.327 Reclaim unit Nominal Size: 6000000 bytes 00:23:52.327 Estimated Reclaim Unit Time Limit: Not Reported 00:23:52.327 RUH Desc #000: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #001: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #002: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #003: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #004: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #005: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #006: RUH Type: Initially Isolated 00:23:52.327 RUH Desc #007: RUH Type: Initially Isolated 00:23:52.327 00:23:52.327 FDP reclaim unit handle usage log page 00:23:52.327 ====================================== 00:23:52.327 Number of Reclaim Unit Handles: 8 00:23:52.327 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:23:52.327 RUH Usage Desc #001: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #002: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #003: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #004: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #005: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #006: RUH Attributes: Unused 00:23:52.327 RUH Usage Desc #007: RUH Attributes: Unused 00:23:52.327 00:23:52.327 FDP statistics log page 00:23:52.327 ======================= 00:23:52.327 Host bytes with metadata written: 425304064 00:23:52.327 Media[2024-11-20 07:21:16.482312] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64876 terminated unexpected 00:23:52.327 bytes with metadata written: 425349120 00:23:52.327 Media bytes erased: 0 00:23:52.327 00:23:52.327 FDP events log page 00:23:52.327 =================== 00:23:52.327 Number of FDP events: 0 00:23:52.327 00:23:52.327 NVM Specific Namespace Data 00:23:52.327 =========================== 00:23:52.327 Logical Block Storage Tag Mask: 0 00:23:52.327 Protection Information Capabilities: 00:23:52.327 16b Guard Protection Information Storage Tag Support: No 00:23:52.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.327 Storage Tag Check Read Support: No 00:23:52.327 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.327 ===================================================== 00:23:52.327 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:52.327 ===================================================== 00:23:52.327 Controller Capabilities/Features 00:23:52.327 ================================ 00:23:52.327 Vendor ID: 1b36 00:23:52.327 Subsystem Vendor ID: 1af4 00:23:52.327 Serial Number: 12342 00:23:52.327 Model Number: QEMU NVMe Ctrl 00:23:52.327 Firmware Version: 8.0.0 00:23:52.327 Recommended Arb Burst: 6 00:23:52.327 IEEE OUI Identifier: 00 54 52 00:23:52.327 Multi-path I/O
00:23:52.327 May have multiple subsystem ports: No 00:23:52.327 May have multiple controllers: No 00:23:52.327 Associated with SR-IOV VF: No 00:23:52.327 Max Data Transfer Size: 524288 00:23:52.327 Max Number of Namespaces: 256 00:23:52.327 Max Number of I/O Queues: 64 00:23:52.327 NVMe Specification Version (VS): 1.4 00:23:52.327 NVMe Specification Version (Identify): 1.4 00:23:52.327 Maximum Queue Entries: 2048 00:23:52.327 Contiguous Queues Required: Yes 00:23:52.327 Arbitration Mechanisms Supported 00:23:52.327 Weighted Round Robin: Not Supported 00:23:52.327 Vendor Specific: Not Supported 00:23:52.327 Reset Timeout: 7500 ms 00:23:52.327 Doorbell Stride: 4 bytes 00:23:52.328 NVM Subsystem Reset: Not Supported 00:23:52.328 Command Sets Supported 00:23:52.328 NVM Command Set: Supported 00:23:52.328 Boot Partition: Not Supported 00:23:52.328 Memory Page Size Minimum: 4096 bytes 00:23:52.328 Memory Page Size Maximum: 65536 bytes 00:23:52.328 Persistent Memory Region: Not Supported 00:23:52.328 Optional Asynchronous Events Supported 00:23:52.328 Namespace Attribute Notices: Supported 00:23:52.328 Firmware Activation Notices: Not Supported 00:23:52.328 ANA Change Notices: Not Supported 00:23:52.328 PLE Aggregate Log Change Notices: Not Supported 00:23:52.328 LBA Status Info Alert Notices: Not Supported 00:23:52.328 EGE Aggregate Log Change Notices: Not Supported 00:23:52.328 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.328 Zone Descriptor Change Notices: Not Supported 00:23:52.328 Discovery Log Change Notices: Not Supported 00:23:52.328 Controller Attributes 00:23:52.328 128-bit Host Identifier: Not Supported 00:23:52.328 Non-Operational Permissive Mode: Not Supported 00:23:52.328 NVM Sets: Not Supported 00:23:52.328 Read Recovery Levels: Not Supported 00:23:52.328 Endurance Groups: Not Supported 00:23:52.328 Predictable Latency Mode: Not Supported 00:23:52.328 Traffic Based Keep ALive: Not Supported 00:23:52.328 Namespace Granularity: Not Supported 00:23:52.328 SQ Associations: Not Supported 00:23:52.328 UUID List: Not Supported 00:23:52.328 Multi-Domain Subsystem: Not Supported 00:23:52.328 Fixed Capacity Management: Not Supported 00:23:52.328 Variable Capacity Management: Not Supported 00:23:52.328 Delete Endurance Group: Not Supported 00:23:52.328 Delete NVM Set: Not Supported 00:23:52.328 Extended LBA Formats Supported: Supported 00:23:52.328 Flexible Data Placement Supported: Not Supported 00:23:52.328 00:23:52.328 Controller Memory Buffer Support 00:23:52.328 ================================ 00:23:52.328 Supported: No 00:23:52.328 00:23:52.328 Persistent Memory Region Support 00:23:52.328 ================================ 00:23:52.328 Supported: No 00:23:52.328 00:23:52.328 Admin Command Set Attributes 00:23:52.328 ============================ 00:23:52.328 Security Send/Receive: Not Supported 00:23:52.328 Format NVM: Supported 00:23:52.328 Firmware Activate/Download: Not Supported 00:23:52.328 Namespace Management: Supported 00:23:52.328 Device Self-Test: Not Supported 00:23:52.328 Directives: Supported 00:23:52.328 NVMe-MI: Not Supported 00:23:52.328 Virtualization Management: Not Supported 00:23:52.328 Doorbell Buffer Config: Supported 00:23:52.328 Get LBA Status Capability: Not Supported 00:23:52.328 Command & Feature Lockdown Capability: Not Supported 00:23:52.328 Abort Command Limit: 4 00:23:52.328 Async Event Request Limit: 4 00:23:52.328 Number of Firmware Slots: N/A 00:23:52.328 Firmware Slot 1 Read-Only: N/A 00:23:52.328 Firmware Activation Without Reset: N/A 
00:23:52.328 Multiple Update Detection Support: N/A 00:23:52.328 Firmware Update Granularity: No Information Provided 00:23:52.328 Per-Namespace SMART Log: Yes 00:23:52.328 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.328 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:23:52.328 Command Effects Log Page: Supported 00:23:52.328 Get Log Page Extended Data: Supported 00:23:52.328 Telemetry Log Pages: Not Supported 00:23:52.328 Persistent Event Log Pages: Not Supported 00:23:52.328 Supported Log Pages Log Page: May Support 00:23:52.328 Commands Supported & Effects Log Page: Not Supported 00:23:52.328 Feature Identifiers & Effects Log Page:May Support 00:23:52.328 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.328 Data Area 4 for Telemetry Log: Not Supported 00:23:52.328 Error Log Page Entries Supported: 1 00:23:52.328 Keep Alive: Not Supported 00:23:52.328 00:23:52.328 NVM Command Set Attributes 00:23:52.328 ========================== 00:23:52.328 Submission Queue Entry Size 00:23:52.328 Max: 64 00:23:52.328 Min: 64 00:23:52.328 Completion Queue Entry Size 00:23:52.328 Max: 16 00:23:52.328 Min: 16 00:23:52.328 Number of Namespaces: 256 00:23:52.328 Compare Command: Supported 00:23:52.328 Write Uncorrectable Command: Not Supported 00:23:52.328 Dataset Management Command: Supported 00:23:52.328 Write Zeroes Command: Supported 00:23:52.328 Set Features Save Field: Supported 00:23:52.328 Reservations: Not Supported 00:23:52.328 Timestamp: Supported 00:23:52.328 Copy: Supported 00:23:52.328 Volatile Write Cache: Present 00:23:52.328 Atomic Write Unit (Normal): 1 00:23:52.328 Atomic Write Unit (PFail): 1 00:23:52.328 Atomic Compare & Write Unit: 1 00:23:52.328 Fused Compare & Write: Not Supported 00:23:52.328 Scatter-Gather List 00:23:52.328 SGL Command Set: Supported 00:23:52.328 SGL Keyed: Not Supported 00:23:52.328 SGL Bit Bucket Descriptor: Not Supported 00:23:52.328 SGL Metadata Pointer: Not Supported 00:23:52.328 Oversized SGL: Not Supported 00:23:52.328 SGL Metadata Address: Not Supported 00:23:52.328 SGL Offset: Not Supported 00:23:52.328 Transport SGL Data Block: Not Supported 00:23:52.328 Replay Protected Memory Block: Not Supported 00:23:52.328 00:23:52.328 Firmware Slot Information 00:23:52.328 ========================= 00:23:52.328 Active slot: 1 00:23:52.328 Slot 1 Firmware Revision: 1.0 00:23:52.328 00:23:52.328 00:23:52.328 Commands Supported and Effects 00:23:52.328 ============================== 00:23:52.328 Admin Commands 00:23:52.328 -------------- 00:23:52.328 Delete I/O Submission Queue (00h): Supported 00:23:52.328 Create I/O Submission Queue (01h): Supported 00:23:52.328 Get Log Page (02h): Supported 00:23:52.328 Delete I/O Completion Queue (04h): Supported 00:23:52.328 Create I/O Completion Queue (05h): Supported 00:23:52.328 Identify (06h): Supported 00:23:52.328 Abort (08h): Supported 00:23:52.328 Set Features (09h): Supported 00:23:52.329 Get Features (0Ah): Supported 00:23:52.329 Asynchronous Event Request (0Ch): Supported 00:23:52.329 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:52.329 Directive Send (19h): Supported 00:23:52.329 Directive Receive (1Ah): Supported 00:23:52.329 Virtualization Management (1Ch): Supported 00:23:52.329 Doorbell Buffer Config (7Ch): Supported 00:23:52.329 Format NVM (80h): Supported LBA-Change 00:23:52.329 I/O Commands 00:23:52.329 ------------ 00:23:52.329 Flush (00h): Supported LBA-Change 00:23:52.329 Write (01h): Supported LBA-Change 00:23:52.329 Read (02h): Supported 00:23:52.329 Compare (05h): 
Supported 00:23:52.329 Write Zeroes (08h): Supported LBA-Change 00:23:52.329 Dataset Management (09h): Supported LBA-Change 00:23:52.329 Unknown (0Ch): Supported 00:23:52.329 Unknown (12h): Supported 00:23:52.329 Copy (19h): Supported LBA-Change 00:23:52.329 Unknown (1Dh): Supported LBA-Change 00:23:52.329 00:23:52.329 Error Log 00:23:52.329 ========= 00:23:52.329 00:23:52.329 Arbitration 00:23:52.329 =========== 00:23:52.329 Arbitration Burst: no limit 00:23:52.329 00:23:52.329 Power Management 00:23:52.329 ================ 00:23:52.329 Number of Power States: 1 00:23:52.329 Current Power State: Power State #0 00:23:52.329 Power State #0: 00:23:52.329 Max Power: 25.00 W 00:23:52.329 Non-Operational State: Operational 00:23:52.329 Entry Latency: 16 microseconds 00:23:52.329 Exit Latency: 4 microseconds 00:23:52.329 Relative Read Throughput: 0 00:23:52.329 Relative Read Latency: 0 00:23:52.329 Relative Write Throughput: 0 00:23:52.329 Relative Write Latency: 0 00:23:52.329 Idle Power: Not Reported 00:23:52.329 Active Power: Not Reported 00:23:52.329 Non-Operational Permissive Mode: Not Supported 00:23:52.329 00:23:52.329 Health Information 00:23:52.329 ================== 00:23:52.329 Critical Warnings: 00:23:52.329 Available Spare Space: OK 00:23:52.329 Temperature: OK 00:23:52.329 Device Reliability: OK 00:23:52.329 Read Only: No 00:23:52.329 Volatile Memory Backup: OK 00:23:52.329 Current Temperature: 323 Kelvin (50 Celsius) 00:23:52.329 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:52.329 Available Spare: 0% 00:23:52.329 Available Spare Threshold: 0% 00:23:52.329 Life Percentage Used: 0% 00:23:52.329 Data Units Read: 2054 00:23:52.329 Data Units Written: 1842 00:23:52.329 Host Read Commands: 94770 00:23:52.329 Host Write Commands: 93039 00:23:52.329 Controller Busy Time: 0 minutes 00:23:52.329 Power Cycles: 0 00:23:52.329 Power On Hours: 0 hours 00:23:52.329 Unsafe Shutdowns: 0 00:23:52.329 Unrecoverable Media Errors: 0 00:23:52.329 Lifetime Error Log Entries: 0 00:23:52.329 Warning Temperature Time: 0 minutes 00:23:52.329 Critical Temperature Time: 0 minutes 00:23:52.329 00:23:52.329 Number of Queues 00:23:52.329 ================ 00:23:52.329 Number of I/O Submission Queues: 64 00:23:52.329 Number of I/O Completion Queues: 64 00:23:52.329 00:23:52.329 ZNS Specific Controller Data 00:23:52.329 ============================ 00:23:52.329 Zone Append Size Limit: 0 00:23:52.329 00:23:52.329 00:23:52.329 Active Namespaces 00:23:52.329 ================= 00:23:52.329 Namespace ID:1 00:23:52.329 Error Recovery Timeout: Unlimited 00:23:52.329 Command Set Identifier: NVM (00h) 00:23:52.329 Deallocate: Supported 00:23:52.329 Deallocated/Unwritten Error: Supported 00:23:52.329 Deallocated Read Value: All 0x00 00:23:52.329 Deallocate in Write Zeroes: Not Supported 00:23:52.329 Deallocated Guard Field: 0xFFFF 00:23:52.329 Flush: Supported 00:23:52.329 Reservation: Not Supported 00:23:52.329 Namespace Sharing Capabilities: Private 00:23:52.329 Size (in LBAs): 1048576 (4GiB) 00:23:52.329 Capacity (in LBAs): 1048576 (4GiB) 00:23:52.329 Utilization (in LBAs): 1048576 (4GiB) 00:23:52.329 Thin Provisioning: Not Supported 00:23:52.329 Per-NS Atomic Units: No 00:23:52.329 Maximum Single Source Range Length: 128 00:23:52.329 Maximum Copy Length: 128 00:23:52.329 Maximum Source Range Count: 128 00:23:52.329 NGUID/EUI64 Never Reused: No 00:23:52.329 Namespace Write Protected: No 00:23:52.329 Number of LBA Formats: 8 00:23:52.329 Current LBA Format: LBA Format #04 00:23:52.329 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:23:52.329 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.329 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.329 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.329 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.329 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.329 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.329 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.329 00:23:52.329 NVM Specific Namespace Data 00:23:52.329 =========================== 00:23:52.329 Logical Block Storage Tag Mask: 0 00:23:52.329 Protection Information Capabilities: 00:23:52.329 16b Guard Protection Information Storage Tag Support: No 00:23:52.329 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.329 Storage Tag Check Read Support: No 00:23:52.329 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.329 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.329 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.329 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.329 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.329 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Namespace ID:2 00:23:52.330 Error Recovery Timeout: Unlimited 00:23:52.330 Command Set Identifier: NVM (00h) 00:23:52.330 Deallocate: Supported 00:23:52.330 Deallocated/Unwritten Error: Supported 00:23:52.330 Deallocated Read Value: All 0x00 00:23:52.330 Deallocate in Write Zeroes: Not Supported 00:23:52.330 Deallocated Guard Field: 0xFFFF 00:23:52.330 Flush: Supported 00:23:52.330 Reservation: Not Supported 00:23:52.330 Namespace Sharing Capabilities: Private 00:23:52.330 Size (in LBAs): 1048576 (4GiB) 00:23:52.330 Capacity (in LBAs): 1048576 (4GiB) 00:23:52.330 Utilization (in LBAs): 1048576 (4GiB) 00:23:52.330 Thin Provisioning: Not Supported 00:23:52.330 Per-NS Atomic Units: No 00:23:52.330 Maximum Single Source Range Length: 128 00:23:52.330 Maximum Copy Length: 128 00:23:52.330 Maximum Source Range Count: 128 00:23:52.330 NGUID/EUI64 Never Reused: No 00:23:52.330 Namespace Write Protected: No 00:23:52.330 Number of LBA Formats: 8 00:23:52.330 Current LBA Format: LBA Format #04 00:23:52.330 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.330 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.330 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.330 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.330 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.330 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.330 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.330 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.330 00:23:52.330 NVM Specific Namespace Data 00:23:52.330 =========================== 00:23:52.330 Logical Block Storage Tag Mask: 0 00:23:52.330 Protection Information Capabilities: 00:23:52.330 16b Guard Protection Information Storage Tag Support: No 00:23:52.330 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:23:52.330 Storage Tag Check Read Support: No 00:23:52.330 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.330 Namespace ID:3 00:23:52.330 Error Recovery Timeout: Unlimited 00:23:52.330 Command Set Identifier: NVM (00h) 00:23:52.330 Deallocate: Supported 00:23:52.330 Deallocated/Unwritten Error: Supported 00:23:52.330 Deallocated Read Value: All 0x00 00:23:52.330 Deallocate in Write Zeroes: Not Supported 00:23:52.330 Deallocated Guard Field: 0xFFFF 00:23:52.330 Flush: Supported 00:23:52.330 Reservation: Not Supported 00:23:52.330 Namespace Sharing Capabilities: Private 00:23:52.330 Size (in LBAs): 1048576 (4GiB) 00:23:52.588 Capacity (in LBAs): 1048576 (4GiB) 00:23:52.588 Utilization (in LBAs): 1048576 (4GiB) 00:23:52.588 Thin Provisioning: Not Supported 00:23:52.588 Per-NS Atomic Units: No 00:23:52.588 Maximum Single Source Range Length: 128 00:23:52.588 Maximum Copy Length: 128 00:23:52.588 Maximum Source Range Count: 128 00:23:52.588 NGUID/EUI64 Never Reused: No 00:23:52.588 Namespace Write Protected: No 00:23:52.588 Number of LBA Formats: 8 00:23:52.588 Current LBA Format: LBA Format #04 00:23:52.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.588 00:23:52.588 NVM Specific Namespace Data 00:23:52.588 =========================== 00:23:52.588 Logical Block Storage Tag Mask: 0 00:23:52.588 Protection Information Capabilities: 00:23:52.588 16b Guard Protection Information Storage Tag Support: No 00:23:52.588 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.588 Storage Tag Check Read Support: No 00:23:52.588 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.588 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:23:52.588 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:23:52.847 ===================================================== 00:23:52.847 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:52.847 ===================================================== 00:23:52.847 Controller Capabilities/Features 00:23:52.847 ================================ 00:23:52.847 Vendor ID: 1b36 00:23:52.847 Subsystem Vendor ID: 1af4 00:23:52.847 Serial Number: 12340 00:23:52.847 Model Number: QEMU NVMe Ctrl 00:23:52.847 Firmware Version: 8.0.0 00:23:52.847 Recommended Arb Burst: 6 00:23:52.847 IEEE OUI Identifier: 00 54 52 00:23:52.847 Multi-path I/O 00:23:52.847 May have multiple subsystem ports: No 00:23:52.847 May have multiple controllers: No 00:23:52.847 Associated with SR-IOV VF: No 00:23:52.847 Max Data Transfer Size: 524288 00:23:52.847 Max Number of Namespaces: 256 00:23:52.847 Max Number of I/O Queues: 64 00:23:52.847 NVMe Specification Version (VS): 1.4 00:23:52.847 NVMe Specification Version (Identify): 1.4 00:23:52.847 Maximum Queue Entries: 2048 00:23:52.847 Contiguous Queues Required: Yes 00:23:52.847 Arbitration Mechanisms Supported 00:23:52.847 Weighted Round Robin: Not Supported 00:23:52.847 Vendor Specific: Not Supported 00:23:52.847 Reset Timeout: 7500 ms 00:23:52.847 Doorbell Stride: 4 bytes 00:23:52.847 NVM Subsystem Reset: Not Supported 00:23:52.847 Command Sets Supported 00:23:52.847 NVM Command Set: Supported 00:23:52.847 Boot Partition: Not Supported 00:23:52.847 Memory Page Size Minimum: 4096 bytes 00:23:52.847 Memory Page Size Maximum: 65536 bytes 00:23:52.847 Persistent Memory Region: Not Supported 00:23:52.847 Optional Asynchronous Events Supported 00:23:52.847 Namespace Attribute Notices: Supported 00:23:52.847 Firmware Activation Notices: Not Supported 00:23:52.847 ANA Change Notices: Not Supported 00:23:52.847 PLE Aggregate Log Change Notices: Not Supported 00:23:52.847 LBA Status Info Alert Notices: Not Supported 00:23:52.847 EGE Aggregate Log Change Notices: Not Supported 00:23:52.847 Normal NVM Subsystem Shutdown event: Not Supported 00:23:52.847 Zone Descriptor Change Notices: Not Supported 00:23:52.847 Discovery Log Change Notices: Not Supported 00:23:52.847 Controller Attributes 00:23:52.847 128-bit Host Identifier: Not Supported 00:23:52.847 Non-Operational Permissive Mode: Not Supported 00:23:52.847 NVM Sets: Not Supported 00:23:52.847 Read Recovery Levels: Not Supported 00:23:52.847 Endurance Groups: Not Supported 00:23:52.847 Predictable Latency Mode: Not Supported 00:23:52.847 Traffic Based Keep ALive: Not Supported 00:23:52.847 Namespace Granularity: Not Supported 00:23:52.847 SQ Associations: Not Supported 00:23:52.847 UUID List: Not Supported 00:23:52.847 Multi-Domain Subsystem: Not Supported 00:23:52.847 Fixed Capacity Management: Not Supported 00:23:52.847 Variable Capacity Management: Not Supported 00:23:52.847 Delete Endurance Group: Not Supported 00:23:52.847 Delete NVM Set: Not Supported 00:23:52.847 Extended LBA Formats Supported: Supported 00:23:52.847 Flexible Data Placement Supported: Not Supported 00:23:52.847 00:23:52.847 Controller Memory Buffer Support 00:23:52.847 ================================ 00:23:52.847 Supported: No 00:23:52.847 00:23:52.847 Persistent Memory Region Support 00:23:52.847 
================================ 00:23:52.847 Supported: No 00:23:52.847 00:23:52.847 Admin Command Set Attributes 00:23:52.847 ============================ 00:23:52.847 Security Send/Receive: Not Supported 00:23:52.847 Format NVM: Supported 00:23:52.847 Firmware Activate/Download: Not Supported 00:23:52.847 Namespace Management: Supported 00:23:52.847 Device Self-Test: Not Supported 00:23:52.847 Directives: Supported 00:23:52.847 NVMe-MI: Not Supported 00:23:52.847 Virtualization Management: Not Supported 00:23:52.847 Doorbell Buffer Config: Supported 00:23:52.847 Get LBA Status Capability: Not Supported 00:23:52.847 Command & Feature Lockdown Capability: Not Supported 00:23:52.847 Abort Command Limit: 4 00:23:52.847 Async Event Request Limit: 4 00:23:52.847 Number of Firmware Slots: N/A 00:23:52.847 Firmware Slot 1 Read-Only: N/A 00:23:52.847 Firmware Activation Without Reset: N/A 00:23:52.847 Multiple Update Detection Support: N/A 00:23:52.847 Firmware Update Granularity: No Information Provided 00:23:52.847 Per-Namespace SMART Log: Yes 00:23:52.847 Asymmetric Namespace Access Log Page: Not Supported 00:23:52.847 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:23:52.847 Command Effects Log Page: Supported 00:23:52.847 Get Log Page Extended Data: Supported 00:23:52.847 Telemetry Log Pages: Not Supported 00:23:52.847 Persistent Event Log Pages: Not Supported 00:23:52.847 Supported Log Pages Log Page: May Support 00:23:52.847 Commands Supported & Effects Log Page: Not Supported 00:23:52.847 Feature Identifiers & Effects Log Page:May Support 00:23:52.847 NVMe-MI Commands & Effects Log Page: May Support 00:23:52.847 Data Area 4 for Telemetry Log: Not Supported 00:23:52.847 Error Log Page Entries Supported: 1 00:23:52.847 Keep Alive: Not Supported 00:23:52.847 00:23:52.847 NVM Command Set Attributes 00:23:52.847 ========================== 00:23:52.847 Submission Queue Entry Size 00:23:52.847 Max: 64 00:23:52.847 Min: 64 00:23:52.847 Completion Queue Entry Size 00:23:52.847 Max: 16 00:23:52.847 Min: 16 00:23:52.847 Number of Namespaces: 256 00:23:52.847 Compare Command: Supported 00:23:52.847 Write Uncorrectable Command: Not Supported 00:23:52.847 Dataset Management Command: Supported 00:23:52.847 Write Zeroes Command: Supported 00:23:52.847 Set Features Save Field: Supported 00:23:52.847 Reservations: Not Supported 00:23:52.847 Timestamp: Supported 00:23:52.847 Copy: Supported 00:23:52.847 Volatile Write Cache: Present 00:23:52.847 Atomic Write Unit (Normal): 1 00:23:52.847 Atomic Write Unit (PFail): 1 00:23:52.847 Atomic Compare & Write Unit: 1 00:23:52.847 Fused Compare & Write: Not Supported 00:23:52.847 Scatter-Gather List 00:23:52.847 SGL Command Set: Supported 00:23:52.847 SGL Keyed: Not Supported 00:23:52.847 SGL Bit Bucket Descriptor: Not Supported 00:23:52.847 SGL Metadata Pointer: Not Supported 00:23:52.847 Oversized SGL: Not Supported 00:23:52.847 SGL Metadata Address: Not Supported 00:23:52.847 SGL Offset: Not Supported 00:23:52.847 Transport SGL Data Block: Not Supported 00:23:52.848 Replay Protected Memory Block: Not Supported 00:23:52.848 00:23:52.848 Firmware Slot Information 00:23:52.848 ========================= 00:23:52.848 Active slot: 1 00:23:52.848 Slot 1 Firmware Revision: 1.0 00:23:52.848 00:23:52.848 00:23:52.848 Commands Supported and Effects 00:23:52.848 ============================== 00:23:52.848 Admin Commands 00:23:52.848 -------------- 00:23:52.848 Delete I/O Submission Queue (00h): Supported 00:23:52.848 Create I/O Submission Queue (01h): Supported 00:23:52.848 
Get Log Page (02h): Supported 00:23:52.848 Delete I/O Completion Queue (04h): Supported 00:23:52.848 Create I/O Completion Queue (05h): Supported 00:23:52.848 Identify (06h): Supported 00:23:52.848 Abort (08h): Supported 00:23:52.848 Set Features (09h): Supported 00:23:52.848 Get Features (0Ah): Supported 00:23:52.848 Asynchronous Event Request (0Ch): Supported 00:23:52.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:52.848 Directive Send (19h): Supported 00:23:52.848 Directive Receive (1Ah): Supported 00:23:52.848 Virtualization Management (1Ch): Supported 00:23:52.848 Doorbell Buffer Config (7Ch): Supported 00:23:52.848 Format NVM (80h): Supported LBA-Change 00:23:52.848 I/O Commands 00:23:52.848 ------------ 00:23:52.848 Flush (00h): Supported LBA-Change 00:23:52.848 Write (01h): Supported LBA-Change 00:23:52.848 Read (02h): Supported 00:23:52.848 Compare (05h): Supported 00:23:52.848 Write Zeroes (08h): Supported LBA-Change 00:23:52.848 Dataset Management (09h): Supported LBA-Change 00:23:52.848 Unknown (0Ch): Supported 00:23:52.848 Unknown (12h): Supported 00:23:52.848 Copy (19h): Supported LBA-Change 00:23:52.848 Unknown (1Dh): Supported LBA-Change 00:23:52.848 00:23:52.848 Error Log 00:23:52.848 ========= 00:23:52.848 00:23:52.848 Arbitration 00:23:52.848 =========== 00:23:52.848 Arbitration Burst: no limit 00:23:52.848 00:23:52.848 Power Management 00:23:52.848 ================ 00:23:52.848 Number of Power States: 1 00:23:52.848 Current Power State: Power State #0 00:23:52.848 Power State #0: 00:23:52.848 Max Power: 25.00 W 00:23:52.848 Non-Operational State: Operational 00:23:52.848 Entry Latency: 16 microseconds 00:23:52.848 Exit Latency: 4 microseconds 00:23:52.848 Relative Read Throughput: 0 00:23:52.848 Relative Read Latency: 0 00:23:52.848 Relative Write Throughput: 0 00:23:52.848 Relative Write Latency: 0 00:23:52.848 Idle Power: Not Reported 00:23:52.848 Active Power: Not Reported 00:23:52.848 Non-Operational Permissive Mode: Not Supported 00:23:52.848 00:23:52.848 Health Information 00:23:52.848 ================== 00:23:52.848 Critical Warnings: 00:23:52.848 Available Spare Space: OK 00:23:52.848 Temperature: OK 00:23:52.848 Device Reliability: OK 00:23:52.848 Read Only: No 00:23:52.848 Volatile Memory Backup: OK 00:23:52.848 Current Temperature: 323 Kelvin (50 Celsius) 00:23:52.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:52.848 Available Spare: 0% 00:23:52.848 Available Spare Threshold: 0% 00:23:52.848 Life Percentage Used: 0% 00:23:52.848 Data Units Read: 656 00:23:52.848 Data Units Written: 584 00:23:52.848 Host Read Commands: 31123 00:23:52.848 Host Write Commands: 30909 00:23:52.848 Controller Busy Time: 0 minutes 00:23:52.848 Power Cycles: 0 00:23:52.848 Power On Hours: 0 hours 00:23:52.848 Unsafe Shutdowns: 0 00:23:52.848 Unrecoverable Media Errors: 0 00:23:52.848 Lifetime Error Log Entries: 0 00:23:52.848 Warning Temperature Time: 0 minutes 00:23:52.848 Critical Temperature Time: 0 minutes 00:23:52.848 00:23:52.848 Number of Queues 00:23:52.848 ================ 00:23:52.848 Number of I/O Submission Queues: 64 00:23:52.848 Number of I/O Completion Queues: 64 00:23:52.848 00:23:52.848 ZNS Specific Controller Data 00:23:52.848 ============================ 00:23:52.848 Zone Append Size Limit: 0 00:23:52.848 00:23:52.848 00:23:52.848 Active Namespaces 00:23:52.848 ================= 00:23:52.848 Namespace ID:1 00:23:52.848 Error Recovery Timeout: Unlimited 00:23:52.848 Command Set Identifier: NVM (00h) 00:23:52.848 Deallocate: Supported 
00:23:52.848 Deallocated/Unwritten Error: Supported 00:23:52.848 Deallocated Read Value: All 0x00 00:23:52.848 Deallocate in Write Zeroes: Not Supported 00:23:52.848 Deallocated Guard Field: 0xFFFF 00:23:52.848 Flush: Supported 00:23:52.848 Reservation: Not Supported 00:23:52.848 Metadata Transferred as: Separate Metadata Buffer 00:23:52.848 Namespace Sharing Capabilities: Private 00:23:52.848 Size (in LBAs): 1548666 (5GiB) 00:23:52.848 Capacity (in LBAs): 1548666 (5GiB) 00:23:52.848 Utilization (in LBAs): 1548666 (5GiB) 00:23:52.848 Thin Provisioning: Not Supported 00:23:52.848 Per-NS Atomic Units: No 00:23:52.848 Maximum Single Source Range Length: 128 00:23:52.848 Maximum Copy Length: 128 00:23:52.848 Maximum Source Range Count: 128 00:23:52.848 NGUID/EUI64 Never Reused: No 00:23:52.848 Namespace Write Protected: No 00:23:52.848 Number of LBA Formats: 8 00:23:52.848 Current LBA Format: LBA Format #07 00:23:52.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:52.848 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:52.848 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:52.848 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:52.848 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:52.848 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:52.848 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:52.848 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:52.848 00:23:52.848 NVM Specific Namespace Data 00:23:52.848 =========================== 00:23:52.848 Logical Block Storage Tag Mask: 0 00:23:52.848 Protection Information Capabilities: 00:23:52.848 16b Guard Protection Information Storage Tag Support: No 00:23:52.848 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:52.848 Storage Tag Check Read Support: No 00:23:52.848 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:52.848 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:23:52.848 07:21:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:23:53.415 ===================================================== 00:23:53.415 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:53.415 ===================================================== 00:23:53.415 Controller Capabilities/Features 00:23:53.415 ================================ 00:23:53.415 Vendor ID: 1b36 00:23:53.415 Subsystem Vendor ID: 1af4 00:23:53.415 Serial Number: 12341 00:23:53.415 Model Number: QEMU NVMe Ctrl 00:23:53.415 Firmware Version: 8.0.0 00:23:53.415 Recommended Arb Burst: 6 00:23:53.415 IEEE OUI Identifier: 00 54 52 00:23:53.415 Multi-path I/O 00:23:53.415 May have multiple subsystem ports: No 00:23:53.415 May have multiple 
controllers: No 00:23:53.415 Associated with SR-IOV VF: No 00:23:53.415 Max Data Transfer Size: 524288 00:23:53.415 Max Number of Namespaces: 256 00:23:53.415 Max Number of I/O Queues: 64 00:23:53.415 NVMe Specification Version (VS): 1.4 00:23:53.415 NVMe Specification Version (Identify): 1.4 00:23:53.415 Maximum Queue Entries: 2048 00:23:53.415 Contiguous Queues Required: Yes 00:23:53.415 Arbitration Mechanisms Supported 00:23:53.415 Weighted Round Robin: Not Supported 00:23:53.415 Vendor Specific: Not Supported 00:23:53.415 Reset Timeout: 7500 ms 00:23:53.415 Doorbell Stride: 4 bytes 00:23:53.416 NVM Subsystem Reset: Not Supported 00:23:53.416 Command Sets Supported 00:23:53.416 NVM Command Set: Supported 00:23:53.416 Boot Partition: Not Supported 00:23:53.416 Memory Page Size Minimum: 4096 bytes 00:23:53.416 Memory Page Size Maximum: 65536 bytes 00:23:53.416 Persistent Memory Region: Not Supported 00:23:53.416 Optional Asynchronous Events Supported 00:23:53.416 Namespace Attribute Notices: Supported 00:23:53.416 Firmware Activation Notices: Not Supported 00:23:53.416 ANA Change Notices: Not Supported 00:23:53.416 PLE Aggregate Log Change Notices: Not Supported 00:23:53.416 LBA Status Info Alert Notices: Not Supported 00:23:53.416 EGE Aggregate Log Change Notices: Not Supported 00:23:53.416 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.416 Zone Descriptor Change Notices: Not Supported 00:23:53.416 Discovery Log Change Notices: Not Supported 00:23:53.416 Controller Attributes 00:23:53.416 128-bit Host Identifier: Not Supported 00:23:53.416 Non-Operational Permissive Mode: Not Supported 00:23:53.416 NVM Sets: Not Supported 00:23:53.416 Read Recovery Levels: Not Supported 00:23:53.416 Endurance Groups: Not Supported 00:23:53.416 Predictable Latency Mode: Not Supported 00:23:53.416 Traffic Based Keep ALive: Not Supported 00:23:53.416 Namespace Granularity: Not Supported 00:23:53.416 SQ Associations: Not Supported 00:23:53.416 UUID List: Not Supported 00:23:53.416 Multi-Domain Subsystem: Not Supported 00:23:53.416 Fixed Capacity Management: Not Supported 00:23:53.416 Variable Capacity Management: Not Supported 00:23:53.416 Delete Endurance Group: Not Supported 00:23:53.416 Delete NVM Set: Not Supported 00:23:53.416 Extended LBA Formats Supported: Supported 00:23:53.416 Flexible Data Placement Supported: Not Supported 00:23:53.416 00:23:53.416 Controller Memory Buffer Support 00:23:53.416 ================================ 00:23:53.416 Supported: No 00:23:53.416 00:23:53.416 Persistent Memory Region Support 00:23:53.416 ================================ 00:23:53.416 Supported: No 00:23:53.416 00:23:53.416 Admin Command Set Attributes 00:23:53.416 ============================ 00:23:53.416 Security Send/Receive: Not Supported 00:23:53.416 Format NVM: Supported 00:23:53.416 Firmware Activate/Download: Not Supported 00:23:53.416 Namespace Management: Supported 00:23:53.416 Device Self-Test: Not Supported 00:23:53.416 Directives: Supported 00:23:53.416 NVMe-MI: Not Supported 00:23:53.416 Virtualization Management: Not Supported 00:23:53.416 Doorbell Buffer Config: Supported 00:23:53.416 Get LBA Status Capability: Not Supported 00:23:53.416 Command & Feature Lockdown Capability: Not Supported 00:23:53.416 Abort Command Limit: 4 00:23:53.416 Async Event Request Limit: 4 00:23:53.416 Number of Firmware Slots: N/A 00:23:53.416 Firmware Slot 1 Read-Only: N/A 00:23:53.416 Firmware Activation Without Reset: N/A 00:23:53.416 Multiple Update Detection Support: N/A 00:23:53.416 Firmware Update 
Granularity: No Information Provided 00:23:53.416 Per-Namespace SMART Log: Yes 00:23:53.416 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.416 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:23:53.416 Command Effects Log Page: Supported 00:23:53.416 Get Log Page Extended Data: Supported 00:23:53.416 Telemetry Log Pages: Not Supported 00:23:53.416 Persistent Event Log Pages: Not Supported 00:23:53.416 Supported Log Pages Log Page: May Support 00:23:53.416 Commands Supported & Effects Log Page: Not Supported 00:23:53.416 Feature Identifiers & Effects Log Page:May Support 00:23:53.416 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.416 Data Area 4 for Telemetry Log: Not Supported 00:23:53.416 Error Log Page Entries Supported: 1 00:23:53.416 Keep Alive: Not Supported 00:23:53.416 00:23:53.416 NVM Command Set Attributes 00:23:53.416 ========================== 00:23:53.416 Submission Queue Entry Size 00:23:53.416 Max: 64 00:23:53.416 Min: 64 00:23:53.416 Completion Queue Entry Size 00:23:53.416 Max: 16 00:23:53.416 Min: 16 00:23:53.416 Number of Namespaces: 256 00:23:53.416 Compare Command: Supported 00:23:53.416 Write Uncorrectable Command: Not Supported 00:23:53.416 Dataset Management Command: Supported 00:23:53.416 Write Zeroes Command: Supported 00:23:53.416 Set Features Save Field: Supported 00:23:53.416 Reservations: Not Supported 00:23:53.416 Timestamp: Supported 00:23:53.416 Copy: Supported 00:23:53.416 Volatile Write Cache: Present 00:23:53.416 Atomic Write Unit (Normal): 1 00:23:53.416 Atomic Write Unit (PFail): 1 00:23:53.416 Atomic Compare & Write Unit: 1 00:23:53.416 Fused Compare & Write: Not Supported 00:23:53.416 Scatter-Gather List 00:23:53.416 SGL Command Set: Supported 00:23:53.416 SGL Keyed: Not Supported 00:23:53.416 SGL Bit Bucket Descriptor: Not Supported 00:23:53.416 SGL Metadata Pointer: Not Supported 00:23:53.416 Oversized SGL: Not Supported 00:23:53.416 SGL Metadata Address: Not Supported 00:23:53.416 SGL Offset: Not Supported 00:23:53.416 Transport SGL Data Block: Not Supported 00:23:53.416 Replay Protected Memory Block: Not Supported 00:23:53.416 00:23:53.416 Firmware Slot Information 00:23:53.416 ========================= 00:23:53.416 Active slot: 1 00:23:53.416 Slot 1 Firmware Revision: 1.0 00:23:53.416 00:23:53.416 00:23:53.416 Commands Supported and Effects 00:23:53.416 ============================== 00:23:53.416 Admin Commands 00:23:53.416 -------------- 00:23:53.416 Delete I/O Submission Queue (00h): Supported 00:23:53.416 Create I/O Submission Queue (01h): Supported 00:23:53.416 Get Log Page (02h): Supported 00:23:53.416 Delete I/O Completion Queue (04h): Supported 00:23:53.417 Create I/O Completion Queue (05h): Supported 00:23:53.417 Identify (06h): Supported 00:23:53.417 Abort (08h): Supported 00:23:53.417 Set Features (09h): Supported 00:23:53.417 Get Features (0Ah): Supported 00:23:53.417 Asynchronous Event Request (0Ch): Supported 00:23:53.417 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:53.417 Directive Send (19h): Supported 00:23:53.417 Directive Receive (1Ah): Supported 00:23:53.417 Virtualization Management (1Ch): Supported 00:23:53.417 Doorbell Buffer Config (7Ch): Supported 00:23:53.417 Format NVM (80h): Supported LBA-Change 00:23:53.417 I/O Commands 00:23:53.417 ------------ 00:23:53.417 Flush (00h): Supported LBA-Change 00:23:53.417 Write (01h): Supported LBA-Change 00:23:53.417 Read (02h): Supported 00:23:53.417 Compare (05h): Supported 00:23:53.417 Write Zeroes (08h): Supported LBA-Change 00:23:53.417 
Dataset Management (09h): Supported LBA-Change 00:23:53.417 Unknown (0Ch): Supported 00:23:53.417 Unknown (12h): Supported 00:23:53.417 Copy (19h): Supported LBA-Change 00:23:53.417 Unknown (1Dh): Supported LBA-Change 00:23:53.417 00:23:53.417 Error Log 00:23:53.417 ========= 00:23:53.417 00:23:53.417 Arbitration 00:23:53.417 =========== 00:23:53.417 Arbitration Burst: no limit 00:23:53.417 00:23:53.417 Power Management 00:23:53.417 ================ 00:23:53.417 Number of Power States: 1 00:23:53.417 Current Power State: Power State #0 00:23:53.417 Power State #0: 00:23:53.417 Max Power: 25.00 W 00:23:53.417 Non-Operational State: Operational 00:23:53.417 Entry Latency: 16 microseconds 00:23:53.417 Exit Latency: 4 microseconds 00:23:53.417 Relative Read Throughput: 0 00:23:53.417 Relative Read Latency: 0 00:23:53.417 Relative Write Throughput: 0 00:23:53.417 Relative Write Latency: 0 00:23:53.417 Idle Power: Not Reported 00:23:53.417 Active Power: Not Reported 00:23:53.417 Non-Operational Permissive Mode: Not Supported 00:23:53.417 00:23:53.417 Health Information 00:23:53.417 ================== 00:23:53.417 Critical Warnings: 00:23:53.417 Available Spare Space: OK 00:23:53.417 Temperature: OK 00:23:53.417 Device Reliability: OK 00:23:53.417 Read Only: No 00:23:53.417 Volatile Memory Backup: OK 00:23:53.417 Current Temperature: 323 Kelvin (50 Celsius) 00:23:53.417 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:53.417 Available Spare: 0% 00:23:53.417 Available Spare Threshold: 0% 00:23:53.417 Life Percentage Used: 0% 00:23:53.417 Data Units Read: 976 00:23:53.417 Data Units Written: 843 00:23:53.417 Host Read Commands: 46456 00:23:53.417 Host Write Commands: 45235 00:23:53.417 Controller Busy Time: 0 minutes 00:23:53.417 Power Cycles: 0 00:23:53.417 Power On Hours: 0 hours 00:23:53.417 Unsafe Shutdowns: 0 00:23:53.417 Unrecoverable Media Errors: 0 00:23:53.417 Lifetime Error Log Entries: 0 00:23:53.417 Warning Temperature Time: 0 minutes 00:23:53.417 Critical Temperature Time: 0 minutes 00:23:53.417 00:23:53.417 Number of Queues 00:23:53.417 ================ 00:23:53.417 Number of I/O Submission Queues: 64 00:23:53.417 Number of I/O Completion Queues: 64 00:23:53.417 00:23:53.417 ZNS Specific Controller Data 00:23:53.417 ============================ 00:23:53.417 Zone Append Size Limit: 0 00:23:53.417 00:23:53.417 00:23:53.417 Active Namespaces 00:23:53.417 ================= 00:23:53.417 Namespace ID:1 00:23:53.417 Error Recovery Timeout: Unlimited 00:23:53.417 Command Set Identifier: NVM (00h) 00:23:53.417 Deallocate: Supported 00:23:53.417 Deallocated/Unwritten Error: Supported 00:23:53.417 Deallocated Read Value: All 0x00 00:23:53.417 Deallocate in Write Zeroes: Not Supported 00:23:53.417 Deallocated Guard Field: 0xFFFF 00:23:53.417 Flush: Supported 00:23:53.417 Reservation: Not Supported 00:23:53.417 Namespace Sharing Capabilities: Private 00:23:53.417 Size (in LBAs): 1310720 (5GiB) 00:23:53.417 Capacity (in LBAs): 1310720 (5GiB) 00:23:53.417 Utilization (in LBAs): 1310720 (5GiB) 00:23:53.417 Thin Provisioning: Not Supported 00:23:53.417 Per-NS Atomic Units: No 00:23:53.417 Maximum Single Source Range Length: 128 00:23:53.417 Maximum Copy Length: 128 00:23:53.417 Maximum Source Range Count: 128 00:23:53.417 NGUID/EUI64 Never Reused: No 00:23:53.417 Namespace Write Protected: No 00:23:53.417 Number of LBA Formats: 8 00:23:53.417 Current LBA Format: LBA Format #04 00:23:53.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.417 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:23:53.417 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:53.417 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:53.417 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:53.417 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:53.417 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:53.417 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:53.417 00:23:53.417 NVM Specific Namespace Data 00:23:53.417 =========================== 00:23:53.417 Logical Block Storage Tag Mask: 0 00:23:53.417 Protection Information Capabilities: 00:23:53.417 16b Guard Protection Information Storage Tag Support: No 00:23:53.417 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:53.417 Storage Tag Check Read Support: No 00:23:53.417 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.418 07:21:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:23:53.418 07:21:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:23:53.678 ===================================================== 00:23:53.678 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:53.678 ===================================================== 00:23:53.678 Controller Capabilities/Features 00:23:53.678 ================================ 00:23:53.678 Vendor ID: 1b36 00:23:53.678 Subsystem Vendor ID: 1af4 00:23:53.678 Serial Number: 12342 00:23:53.678 Model Number: QEMU NVMe Ctrl 00:23:53.678 Firmware Version: 8.0.0 00:23:53.678 Recommended Arb Burst: 6 00:23:53.678 IEEE OUI Identifier: 00 54 52 00:23:53.678 Multi-path I/O 00:23:53.678 May have multiple subsystem ports: No 00:23:53.678 May have multiple controllers: No 00:23:53.678 Associated with SR-IOV VF: No 00:23:53.678 Max Data Transfer Size: 524288 00:23:53.678 Max Number of Namespaces: 256 00:23:53.678 Max Number of I/O Queues: 64 00:23:53.678 NVMe Specification Version (VS): 1.4 00:23:53.678 NVMe Specification Version (Identify): 1.4 00:23:53.678 Maximum Queue Entries: 2048 00:23:53.678 Contiguous Queues Required: Yes 00:23:53.678 Arbitration Mechanisms Supported 00:23:53.678 Weighted Round Robin: Not Supported 00:23:53.678 Vendor Specific: Not Supported 00:23:53.678 Reset Timeout: 7500 ms 00:23:53.678 Doorbell Stride: 4 bytes 00:23:53.678 NVM Subsystem Reset: Not Supported 00:23:53.678 Command Sets Supported 00:23:53.678 NVM Command Set: Supported 00:23:53.678 Boot Partition: Not Supported 00:23:53.678 Memory Page Size Minimum: 4096 bytes 00:23:53.678 Memory Page Size Maximum: 65536 bytes 00:23:53.678 Persistent Memory Region: Not Supported 00:23:53.678 Optional Asynchronous Events Supported 00:23:53.678 Namespace Attribute Notices: Supported 00:23:53.679 Firmware 
Activation Notices: Not Supported 00:23:53.679 ANA Change Notices: Not Supported 00:23:53.679 PLE Aggregate Log Change Notices: Not Supported 00:23:53.679 LBA Status Info Alert Notices: Not Supported 00:23:53.679 EGE Aggregate Log Change Notices: Not Supported 00:23:53.679 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.679 Zone Descriptor Change Notices: Not Supported 00:23:53.679 Discovery Log Change Notices: Not Supported 00:23:53.679 Controller Attributes 00:23:53.679 128-bit Host Identifier: Not Supported 00:23:53.679 Non-Operational Permissive Mode: Not Supported 00:23:53.679 NVM Sets: Not Supported 00:23:53.679 Read Recovery Levels: Not Supported 00:23:53.679 Endurance Groups: Not Supported 00:23:53.679 Predictable Latency Mode: Not Supported 00:23:53.679 Traffic Based Keep ALive: Not Supported 00:23:53.679 Namespace Granularity: Not Supported 00:23:53.679 SQ Associations: Not Supported 00:23:53.679 UUID List: Not Supported 00:23:53.679 Multi-Domain Subsystem: Not Supported 00:23:53.679 Fixed Capacity Management: Not Supported 00:23:53.679 Variable Capacity Management: Not Supported 00:23:53.679 Delete Endurance Group: Not Supported 00:23:53.679 Delete NVM Set: Not Supported 00:23:53.679 Extended LBA Formats Supported: Supported 00:23:53.679 Flexible Data Placement Supported: Not Supported 00:23:53.679 00:23:53.679 Controller Memory Buffer Support 00:23:53.679 ================================ 00:23:53.679 Supported: No 00:23:53.679 00:23:53.679 Persistent Memory Region Support 00:23:53.679 ================================ 00:23:53.679 Supported: No 00:23:53.679 00:23:53.679 Admin Command Set Attributes 00:23:53.679 ============================ 00:23:53.679 Security Send/Receive: Not Supported 00:23:53.679 Format NVM: Supported 00:23:53.679 Firmware Activate/Download: Not Supported 00:23:53.679 Namespace Management: Supported 00:23:53.679 Device Self-Test: Not Supported 00:23:53.679 Directives: Supported 00:23:53.679 NVMe-MI: Not Supported 00:23:53.679 Virtualization Management: Not Supported 00:23:53.679 Doorbell Buffer Config: Supported 00:23:53.679 Get LBA Status Capability: Not Supported 00:23:53.679 Command & Feature Lockdown Capability: Not Supported 00:23:53.679 Abort Command Limit: 4 00:23:53.679 Async Event Request Limit: 4 00:23:53.679 Number of Firmware Slots: N/A 00:23:53.679 Firmware Slot 1 Read-Only: N/A 00:23:53.679 Firmware Activation Without Reset: N/A 00:23:53.679 Multiple Update Detection Support: N/A 00:23:53.679 Firmware Update Granularity: No Information Provided 00:23:53.679 Per-Namespace SMART Log: Yes 00:23:53.679 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.679 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:23:53.679 Command Effects Log Page: Supported 00:23:53.679 Get Log Page Extended Data: Supported 00:23:53.679 Telemetry Log Pages: Not Supported 00:23:53.679 Persistent Event Log Pages: Not Supported 00:23:53.679 Supported Log Pages Log Page: May Support 00:23:53.679 Commands Supported & Effects Log Page: Not Supported 00:23:53.679 Feature Identifiers & Effects Log Page:May Support 00:23:53.679 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.679 Data Area 4 for Telemetry Log: Not Supported 00:23:53.679 Error Log Page Entries Supported: 1 00:23:53.679 Keep Alive: Not Supported 00:23:53.679 00:23:53.679 NVM Command Set Attributes 00:23:53.679 ========================== 00:23:53.679 Submission Queue Entry Size 00:23:53.679 Max: 64 00:23:53.679 Min: 64 00:23:53.679 Completion Queue Entry Size 00:23:53.679 Max: 16 
00:23:53.679 Min: 16 00:23:53.679 Number of Namespaces: 256 00:23:53.679 Compare Command: Supported 00:23:53.679 Write Uncorrectable Command: Not Supported 00:23:53.679 Dataset Management Command: Supported 00:23:53.679 Write Zeroes Command: Supported 00:23:53.679 Set Features Save Field: Supported 00:23:53.679 Reservations: Not Supported 00:23:53.679 Timestamp: Supported 00:23:53.679 Copy: Supported 00:23:53.679 Volatile Write Cache: Present 00:23:53.679 Atomic Write Unit (Normal): 1 00:23:53.679 Atomic Write Unit (PFail): 1 00:23:53.679 Atomic Compare & Write Unit: 1 00:23:53.679 Fused Compare & Write: Not Supported 00:23:53.679 Scatter-Gather List 00:23:53.679 SGL Command Set: Supported 00:23:53.679 SGL Keyed: Not Supported 00:23:53.679 SGL Bit Bucket Descriptor: Not Supported 00:23:53.679 SGL Metadata Pointer: Not Supported 00:23:53.679 Oversized SGL: Not Supported 00:23:53.679 SGL Metadata Address: Not Supported 00:23:53.679 SGL Offset: Not Supported 00:23:53.679 Transport SGL Data Block: Not Supported 00:23:53.679 Replay Protected Memory Block: Not Supported 00:23:53.679 00:23:53.679 Firmware Slot Information 00:23:53.679 ========================= 00:23:53.679 Active slot: 1 00:23:53.679 Slot 1 Firmware Revision: 1.0 00:23:53.679 00:23:53.679 00:23:53.679 Commands Supported and Effects 00:23:53.679 ============================== 00:23:53.679 Admin Commands 00:23:53.679 -------------- 00:23:53.679 Delete I/O Submission Queue (00h): Supported 00:23:53.679 Create I/O Submission Queue (01h): Supported 00:23:53.679 Get Log Page (02h): Supported 00:23:53.679 Delete I/O Completion Queue (04h): Supported 00:23:53.679 Create I/O Completion Queue (05h): Supported 00:23:53.679 Identify (06h): Supported 00:23:53.679 Abort (08h): Supported 00:23:53.679 Set Features (09h): Supported 00:23:53.679 Get Features (0Ah): Supported 00:23:53.679 Asynchronous Event Request (0Ch): Supported 00:23:53.679 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:53.679 Directive Send (19h): Supported 00:23:53.679 Directive Receive (1Ah): Supported 00:23:53.679 Virtualization Management (1Ch): Supported 00:23:53.679 Doorbell Buffer Config (7Ch): Supported 00:23:53.679 Format NVM (80h): Supported LBA-Change 00:23:53.679 I/O Commands 00:23:53.679 ------------ 00:23:53.679 Flush (00h): Supported LBA-Change 00:23:53.679 Write (01h): Supported LBA-Change 00:23:53.679 Read (02h): Supported 00:23:53.679 Compare (05h): Supported 00:23:53.679 Write Zeroes (08h): Supported LBA-Change 00:23:53.679 Dataset Management (09h): Supported LBA-Change 00:23:53.679 Unknown (0Ch): Supported 00:23:53.679 Unknown (12h): Supported 00:23:53.679 Copy (19h): Supported LBA-Change 00:23:53.679 Unknown (1Dh): Supported LBA-Change 00:23:53.679 00:23:53.679 Error Log 00:23:53.679 ========= 00:23:53.679 00:23:53.679 Arbitration 00:23:53.679 =========== 00:23:53.679 Arbitration Burst: no limit 00:23:53.679 00:23:53.679 Power Management 00:23:53.680 ================ 00:23:53.680 Number of Power States: 1 00:23:53.680 Current Power State: Power State #0 00:23:53.680 Power State #0: 00:23:53.680 Max Power: 25.00 W 00:23:53.680 Non-Operational State: Operational 00:23:53.680 Entry Latency: 16 microseconds 00:23:53.680 Exit Latency: 4 microseconds 00:23:53.680 Relative Read Throughput: 0 00:23:53.680 Relative Read Latency: 0 00:23:53.680 Relative Write Throughput: 0 00:23:53.680 Relative Write Latency: 0 00:23:53.680 Idle Power: Not Reported 00:23:53.680 Active Power: Not Reported 00:23:53.680 Non-Operational Permissive Mode: Not Supported 
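[Editor's note: every identify report in this section is emitted by the nvme.sh loop whose xtrace lines appear between the dumps ("for bdf in "${bdfs[@]}"" followed by an spdk_nvme_identify call). A minimal sketch of that pattern for reproducing a single controller's report by hand is given below; the hard-coded bdfs array is an assumption, since this excerpt does not show how nvme.sh populates it, and the four PCI addresses are simply the ones that occur in this log:

    # Sketch only: the test harness fills in bdfs itself; these four
    # addresses are the controllers identified in this log.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # Same -r (transport/address) and -i flags recorded in the
        # xtrace lines of this log.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done

End of note; the 0000:00:12.0 report continues below.]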
00:23:53.680 00:23:53.680 Health Information 00:23:53.680 ================== 00:23:53.680 Critical Warnings: 00:23:53.680 Available Spare Space: OK 00:23:53.680 Temperature: OK 00:23:53.680 Device Reliability: OK 00:23:53.680 Read Only: No 00:23:53.680 Volatile Memory Backup: OK 00:23:53.680 Current Temperature: 323 Kelvin (50 Celsius) 00:23:53.680 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:53.680 Available Spare: 0% 00:23:53.680 Available Spare Threshold: 0% 00:23:53.680 Life Percentage Used: 0% 00:23:53.680 Data Units Read: 2054 00:23:53.680 Data Units Written: 1842 00:23:53.680 Host Read Commands: 94770 00:23:53.680 Host Write Commands: 93039 00:23:53.680 Controller Busy Time: 0 minutes 00:23:53.680 Power Cycles: 0 00:23:53.680 Power On Hours: 0 hours 00:23:53.680 Unsafe Shutdowns: 0 00:23:53.680 Unrecoverable Media Errors: 0 00:23:53.680 Lifetime Error Log Entries: 0 00:23:53.680 Warning Temperature Time: 0 minutes 00:23:53.680 Critical Temperature Time: 0 minutes 00:23:53.680 00:23:53.680 Number of Queues 00:23:53.680 ================ 00:23:53.680 Number of I/O Submission Queues: 64 00:23:53.680 Number of I/O Completion Queues: 64 00:23:53.680 00:23:53.680 ZNS Specific Controller Data 00:23:53.680 ============================ 00:23:53.680 Zone Append Size Limit: 0 00:23:53.680 00:23:53.680 00:23:53.680 Active Namespaces 00:23:53.680 ================= 00:23:53.680 Namespace ID:1 00:23:53.680 Error Recovery Timeout: Unlimited 00:23:53.680 Command Set Identifier: NVM (00h) 00:23:53.680 Deallocate: Supported 00:23:53.680 Deallocated/Unwritten Error: Supported 00:23:53.680 Deallocated Read Value: All 0x00 00:23:53.680 Deallocate in Write Zeroes: Not Supported 00:23:53.680 Deallocated Guard Field: 0xFFFF 00:23:53.680 Flush: Supported 00:23:53.680 Reservation: Not Supported 00:23:53.680 Namespace Sharing Capabilities: Private 00:23:53.680 Size (in LBAs): 1048576 (4GiB) 00:23:53.680 Capacity (in LBAs): 1048576 (4GiB) 00:23:53.680 Utilization (in LBAs): 1048576 (4GiB) 00:23:53.680 Thin Provisioning: Not Supported 00:23:53.680 Per-NS Atomic Units: No 00:23:53.680 Maximum Single Source Range Length: 128 00:23:53.680 Maximum Copy Length: 128 00:23:53.680 Maximum Source Range Count: 128 00:23:53.680 NGUID/EUI64 Never Reused: No 00:23:53.680 Namespace Write Protected: No 00:23:53.680 Number of LBA Formats: 8 00:23:53.680 Current LBA Format: LBA Format #04 00:23:53.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:53.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:53.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:53.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:53.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:53.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:53.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:53.680 00:23:53.680 NVM Specific Namespace Data 00:23:53.680 =========================== 00:23:53.680 Logical Block Storage Tag Mask: 0 00:23:53.680 Protection Information Capabilities: 00:23:53.680 16b Guard Protection Information Storage Tag Support: No 00:23:53.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:53.680 Storage Tag Check Read Support: No 00:23:53.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Namespace ID:2 00:23:53.680 Error Recovery Timeout: Unlimited 00:23:53.680 Command Set Identifier: NVM (00h) 00:23:53.680 Deallocate: Supported 00:23:53.680 Deallocated/Unwritten Error: Supported 00:23:53.680 Deallocated Read Value: All 0x00 00:23:53.680 Deallocate in Write Zeroes: Not Supported 00:23:53.680 Deallocated Guard Field: 0xFFFF 00:23:53.680 Flush: Supported 00:23:53.680 Reservation: Not Supported 00:23:53.680 Namespace Sharing Capabilities: Private 00:23:53.680 Size (in LBAs): 1048576 (4GiB) 00:23:53.680 Capacity (in LBAs): 1048576 (4GiB) 00:23:53.680 Utilization (in LBAs): 1048576 (4GiB) 00:23:53.680 Thin Provisioning: Not Supported 00:23:53.680 Per-NS Atomic Units: No 00:23:53.680 Maximum Single Source Range Length: 128 00:23:53.680 Maximum Copy Length: 128 00:23:53.680 Maximum Source Range Count: 128 00:23:53.680 NGUID/EUI64 Never Reused: No 00:23:53.680 Namespace Write Protected: No 00:23:53.680 Number of LBA Formats: 8 00:23:53.680 Current LBA Format: LBA Format #04 00:23:53.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.680 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:53.680 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:53.680 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:53.680 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:53.680 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:53.680 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:53.680 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:53.680 00:23:53.680 NVM Specific Namespace Data 00:23:53.680 =========================== 00:23:53.680 Logical Block Storage Tag Mask: 0 00:23:53.680 Protection Information Capabilities: 00:23:53.680 16b Guard Protection Information Storage Tag Support: No 00:23:53.680 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:53.680 Storage Tag Check Read Support: No 00:23:53.680 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.680 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Namespace ID:3 00:23:53.681 Error Recovery Timeout: Unlimited 00:23:53.681 Command Set Identifier: NVM (00h) 00:23:53.681 Deallocate: Supported 00:23:53.681 Deallocated/Unwritten Error: Supported 00:23:53.681 Deallocated Read 
Value: All 0x00 00:23:53.681 Deallocate in Write Zeroes: Not Supported 00:23:53.681 Deallocated Guard Field: 0xFFFF 00:23:53.681 Flush: Supported 00:23:53.681 Reservation: Not Supported 00:23:53.681 Namespace Sharing Capabilities: Private 00:23:53.681 Size (in LBAs): 1048576 (4GiB) 00:23:53.681 Capacity (in LBAs): 1048576 (4GiB) 00:23:53.681 Utilization (in LBAs): 1048576 (4GiB) 00:23:53.681 Thin Provisioning: Not Supported 00:23:53.681 Per-NS Atomic Units: No 00:23:53.681 Maximum Single Source Range Length: 128 00:23:53.681 Maximum Copy Length: 128 00:23:53.681 Maximum Source Range Count: 128 00:23:53.681 NGUID/EUI64 Never Reused: No 00:23:53.681 Namespace Write Protected: No 00:23:53.681 Number of LBA Formats: 8 00:23:53.681 Current LBA Format: LBA Format #04 00:23:53.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.681 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:53.681 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:53.681 LBA Format #03: Data Size: 512 Metadata Size: 64 00:23:53.681 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:53.681 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:53.681 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:53.681 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:53.681 00:23:53.681 NVM Specific Namespace Data 00:23:53.681 =========================== 00:23:53.681 Logical Block Storage Tag Mask: 0 00:23:53.681 Protection Information Capabilities: 00:23:53.681 16b Guard Protection Information Storage Tag Support: No 00:23:53.681 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:53.681 Storage Tag Check Read Support: No 00:23:53.681 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.681 07:21:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:23:53.681 07:21:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:23:53.981 ===================================================== 00:23:53.981 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:53.981 ===================================================== 00:23:53.981 Controller Capabilities/Features 00:23:53.981 ================================ 00:23:53.981 Vendor ID: 1b36 00:23:53.981 Subsystem Vendor ID: 1af4 00:23:53.981 Serial Number: 12343 00:23:53.981 Model Number: QEMU NVMe Ctrl 00:23:53.981 Firmware Version: 8.0.0 00:23:53.981 Recommended Arb Burst: 6 00:23:53.981 IEEE OUI Identifier: 00 54 52 00:23:53.981 Multi-path I/O 00:23:53.981 May have multiple subsystem ports: No 00:23:53.981 May have multiple controllers: Yes 00:23:53.981 Associated with SR-IOV VF: No 00:23:53.981 Max Data Transfer Size: 524288 00:23:53.981 Max Number of Namespaces: 
256 00:23:53.981 Max Number of I/O Queues: 64 00:23:53.981 NVMe Specification Version (VS): 1.4 00:23:53.981 NVMe Specification Version (Identify): 1.4 00:23:53.981 Maximum Queue Entries: 2048 00:23:53.981 Contiguous Queues Required: Yes 00:23:53.981 Arbitration Mechanisms Supported 00:23:53.981 Weighted Round Robin: Not Supported 00:23:53.981 Vendor Specific: Not Supported 00:23:53.981 Reset Timeout: 7500 ms 00:23:53.981 Doorbell Stride: 4 bytes 00:23:53.981 NVM Subsystem Reset: Not Supported 00:23:53.981 Command Sets Supported 00:23:53.981 NVM Command Set: Supported 00:23:53.981 Boot Partition: Not Supported 00:23:53.981 Memory Page Size Minimum: 4096 bytes 00:23:53.981 Memory Page Size Maximum: 65536 bytes 00:23:53.981 Persistent Memory Region: Not Supported 00:23:53.981 Optional Asynchronous Events Supported 00:23:53.981 Namespace Attribute Notices: Supported 00:23:53.981 Firmware Activation Notices: Not Supported 00:23:53.981 ANA Change Notices: Not Supported 00:23:53.981 PLE Aggregate Log Change Notices: Not Supported 00:23:53.981 LBA Status Info Alert Notices: Not Supported 00:23:53.981 EGE Aggregate Log Change Notices: Not Supported 00:23:53.981 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.981 Zone Descriptor Change Notices: Not Supported 00:23:53.981 Discovery Log Change Notices: Not Supported 00:23:53.981 Controller Attributes 00:23:53.981 128-bit Host Identifier: Not Supported 00:23:53.981 Non-Operational Permissive Mode: Not Supported 00:23:53.981 NVM Sets: Not Supported 00:23:53.981 Read Recovery Levels: Not Supported 00:23:53.981 Endurance Groups: Supported 00:23:53.981 Predictable Latency Mode: Not Supported 00:23:53.981 Traffic Based Keep Alive: Not Supported 00:23:53.981 Namespace Granularity: Not Supported 00:23:53.981 SQ Associations: Not Supported 00:23:53.981 UUID List: Not Supported 00:23:53.981 Multi-Domain Subsystem: Not Supported 00:23:53.981 Fixed Capacity Management: Not Supported 00:23:53.981 Variable Capacity Management: Not Supported 00:23:53.981 Delete Endurance Group: Not Supported 00:23:53.981 Delete NVM Set: Not Supported 00:23:53.981 Extended LBA Formats Supported: Supported 00:23:53.981 Flexible Data Placement Supported: Supported 00:23:53.981 00:23:53.981 Controller Memory Buffer Support 00:23:53.981 ================================ 00:23:53.981 Supported: No 00:23:53.981 00:23:53.981 Persistent Memory Region Support 00:23:53.981 ================================ 00:23:53.981 Supported: No 00:23:53.981 00:23:53.981 Admin Command Set Attributes 00:23:53.981 ============================ 00:23:53.981 Security Send/Receive: Not Supported 00:23:53.981 Format NVM: Supported 00:23:53.981 Firmware Activate/Download: Not Supported 00:23:53.981 Namespace Management: Supported 00:23:53.981 Device Self-Test: Not Supported 00:23:53.981 Directives: Supported 00:23:53.981 NVMe-MI: Not Supported 00:23:53.981 Virtualization Management: Not Supported 00:23:53.981 Doorbell Buffer Config: Supported 00:23:53.981 Get LBA Status Capability: Not Supported 00:23:53.981 Command & Feature Lockdown Capability: Not Supported 00:23:53.981 Abort Command Limit: 4 00:23:53.981 Async Event Request Limit: 4 00:23:53.981 Number of Firmware Slots: N/A 00:23:53.982 Firmware Slot 1 Read-Only: N/A 00:23:53.982 Firmware Activation Without Reset: N/A 00:23:53.982 Multiple Update Detection Support: N/A 00:23:53.982 Firmware Update Granularity: No Information Provided 00:23:53.982 Per-Namespace SMART Log: Yes 00:23:53.982 Asymmetric Namespace Access Log Page: Not Supported 
00:23:53.982 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:23:53.982 Command Effects Log Page: Supported 00:23:53.982 Get Log Page Extended Data: Supported 00:23:53.982 Telemetry Log Pages: Not Supported 00:23:53.982 Persistent Event Log Pages: Not Supported 00:23:53.982 Supported Log Pages Log Page: May Support 00:23:53.982 Commands Supported & Effects Log Page: Not Supported 00:23:53.982 Feature Identifiers & Effects Log Page: May Support 00:23:53.982 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.982 Data Area 4 for Telemetry Log: Not Supported 00:23:53.982 Error Log Page Entries Supported: 1 00:23:53.982 Keep Alive: Not Supported 00:23:53.982 00:23:53.982 NVM Command Set Attributes 00:23:53.982 ========================== 00:23:53.982 Submission Queue Entry Size 00:23:53.982 Max: 64 00:23:53.982 Min: 64 00:23:53.982 Completion Queue Entry Size 00:23:53.982 Max: 16 00:23:53.982 Min: 16 00:23:53.982 Number of Namespaces: 256 00:23:53.982 Compare Command: Supported 00:23:53.982 Write Uncorrectable Command: Not Supported 00:23:53.982 Dataset Management Command: Supported 00:23:53.982 Write Zeroes Command: Supported 00:23:53.982 Set Features Save Field: Supported 00:23:53.982 Reservations: Not Supported 00:23:53.982 Timestamp: Supported 00:23:53.982 Copy: Supported 00:23:53.982 Volatile Write Cache: Present 00:23:53.982 Atomic Write Unit (Normal): 1 00:23:53.982 Atomic Write Unit (PFail): 1 00:23:53.982 Atomic Compare & Write Unit: 1 00:23:53.982 Fused Compare & Write: Not Supported 00:23:53.982 Scatter-Gather List 00:23:53.982 SGL Command Set: Supported 00:23:53.982 SGL Keyed: Not Supported 00:23:53.982 SGL Bit Bucket Descriptor: Not Supported 00:23:53.982 SGL Metadata Pointer: Not Supported 00:23:53.982 Oversized SGL: Not Supported 00:23:53.982 SGL Metadata Address: Not Supported 00:23:53.982 SGL Offset: Not Supported 00:23:53.982 Transport SGL Data Block: Not Supported 00:23:53.982 Replay Protected Memory Block: Not Supported 00:23:53.982 00:23:53.982 Firmware Slot Information 00:23:53.982 ========================= 00:23:53.982 Active slot: 1 00:23:53.982 Slot 1 Firmware Revision: 1.0 00:23:53.982 00:23:53.982 00:23:53.982 Commands Supported and Effects 00:23:53.982 ============================== 00:23:53.982 Admin Commands 00:23:53.982 -------------- 00:23:53.982 Delete I/O Submission Queue (00h): Supported 00:23:53.982 Create I/O Submission Queue (01h): Supported 00:23:53.982 Get Log Page (02h): Supported 00:23:53.982 Delete I/O Completion Queue (04h): Supported 00:23:53.982 Create I/O Completion Queue (05h): Supported 00:23:53.982 Identify (06h): Supported 00:23:53.982 Abort (08h): Supported 00:23:53.982 Set Features (09h): Supported 00:23:53.982 Get Features (0Ah): Supported 00:23:53.982 Asynchronous Event Request (0Ch): Supported 00:23:53.982 Namespace Attachment (15h): Supported NS-Inventory-Change 00:23:53.982 Directive Send (19h): Supported 00:23:53.982 Directive Receive (1Ah): Supported 00:23:53.982 Virtualization Management (1Ch): Supported 00:23:53.982 Doorbell Buffer Config (7Ch): Supported 00:23:53.982 Format NVM (80h): Supported LBA-Change 00:23:53.982 I/O Commands 00:23:53.982 ------------ 00:23:53.982 Flush (00h): Supported LBA-Change 00:23:53.982 Write (01h): Supported LBA-Change 00:23:53.982 Read (02h): Supported 00:23:53.982 Compare (05h): Supported 00:23:53.982 Write Zeroes (08h): Supported LBA-Change 00:23:53.982 Dataset Management (09h): Supported LBA-Change 00:23:53.982 Unknown (0Ch): Supported 00:23:53.982 Unknown (12h): Supported 00:23:53.982 Copy 
(19h): Supported LBA-Change 00:23:53.982 Unknown (1Dh): Supported LBA-Change 00:23:53.982 00:23:53.982 Error Log 00:23:53.982 ========= 00:23:53.982 00:23:53.982 Arbitration 00:23:53.982 =========== 00:23:53.982 Arbitration Burst: no limit 00:23:53.982 00:23:53.982 Power Management 00:23:53.982 ================ 00:23:53.982 Number of Power States: 1 00:23:53.982 Current Power State: Power State #0 00:23:53.982 Power State #0: 00:23:53.982 Max Power: 25.00 W 00:23:53.982 Non-Operational State: Operational 00:23:53.982 Entry Latency: 16 microseconds 00:23:53.982 Exit Latency: 4 microseconds 00:23:53.982 Relative Read Throughput: 0 00:23:53.982 Relative Read Latency: 0 00:23:53.982 Relative Write Throughput: 0 00:23:53.982 Relative Write Latency: 0 00:23:53.982 Idle Power: Not Reported 00:23:53.982 Active Power: Not Reported 00:23:53.982 Non-Operational Permissive Mode: Not Supported 00:23:53.982 00:23:53.982 Health Information 00:23:53.982 ================== 00:23:53.982 Critical Warnings: 00:23:53.982 Available Spare Space: OK 00:23:53.982 Temperature: OK 00:23:53.982 Device Reliability: OK 00:23:53.982 Read Only: No 00:23:53.982 Volatile Memory Backup: OK 00:23:53.982 Current Temperature: 323 Kelvin (50 Celsius) 00:23:53.982 Temperature Threshold: 343 Kelvin (70 Celsius) 00:23:53.982 Available Spare: 0% 00:23:53.982 Available Spare Threshold: 0% 00:23:53.982 Life Percentage Used: 0% 00:23:53.982 Data Units Read: 747 00:23:53.982 Data Units Written: 676 00:23:53.982 Host Read Commands: 32193 00:23:53.982 Host Write Commands: 31616 00:23:53.982 Controller Busy Time: 0 minutes 00:23:53.982 Power Cycles: 0 00:23:53.982 Power On Hours: 0 hours 00:23:53.982 Unsafe Shutdowns: 0 00:23:53.982 Unrecoverable Media Errors: 0 00:23:53.982 Lifetime Error Log Entries: 0 00:23:53.982 Warning Temperature Time: 0 minutes 00:23:53.982 Critical Temperature Time: 0 minutes 00:23:53.982 00:23:53.982 Number of Queues 00:23:53.982 ================ 00:23:53.982 Number of I/O Submission Queues: 64 00:23:53.982 Number of I/O Completion Queues: 64 00:23:53.982 00:23:53.982 ZNS Specific Controller Data 00:23:53.982 ============================ 00:23:53.982 Zone Append Size Limit: 0 00:23:53.982 00:23:53.982 00:23:53.982 Active Namespaces 00:23:53.982 ================= 00:23:53.982 Namespace ID:1 00:23:53.982 Error Recovery Timeout: Unlimited 00:23:53.982 Command Set Identifier: NVM (00h) 00:23:53.982 Deallocate: Supported 00:23:53.982 Deallocated/Unwritten Error: Supported 00:23:53.982 Deallocated Read Value: All 0x00 00:23:53.982 Deallocate in Write Zeroes: Not Supported 00:23:53.982 Deallocated Guard Field: 0xFFFF 00:23:53.982 Flush: Supported 00:23:53.982 Reservation: Not Supported 00:23:53.982 Namespace Sharing Capabilities: Multiple Controllers 00:23:53.982 Size (in LBAs): 262144 (1GiB) 00:23:53.982 Capacity (in LBAs): 262144 (1GiB) 00:23:53.982 Utilization (in LBAs): 262144 (1GiB) 00:23:53.982 Thin Provisioning: Not Supported 00:23:53.982 Per-NS Atomic Units: No 00:23:53.982 Maximum Single Source Range Length: 128 00:23:53.982 Maximum Copy Length: 128 00:23:53.982 Maximum Source Range Count: 128 00:23:53.982 NGUID/EUI64 Never Reused: No 00:23:53.982 Namespace Write Protected: No 00:23:53.982 Endurance group ID: 1 00:23:53.982 Number of LBA Formats: 8 00:23:53.982 Current LBA Format: LBA Format #04 00:23:53.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.982 LBA Format #01: Data Size: 512 Metadata Size: 8 00:23:53.982 LBA Format #02: Data Size: 512 Metadata Size: 16 00:23:53.982 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:23:53.982 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:23:53.982 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:23:53.982 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:23:53.982 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:23:53.982 00:23:53.982 Get Feature FDP: 00:23:53.982 ================ 00:23:53.982 Enabled: Yes 00:23:53.982 FDP configuration index: 0 00:23:53.982 00:23:53.982 FDP configurations log page 00:23:53.982 =========================== 00:23:53.982 Number of FDP configurations: 1 00:23:53.982 Version: 0 00:23:53.982 Size: 112 00:23:53.982 FDP Configuration Descriptor: 0 00:23:53.982 Descriptor Size: 96 00:23:53.982 Reclaim Group Identifier format: 2 00:23:53.982 FDP Volatile Write Cache: Not Present 00:23:53.982 FDP Configuration: Valid 00:23:53.982 Vendor Specific Size: 0 00:23:53.982 Number of Reclaim Groups: 2 00:23:53.982 Number of Reclaim Unit Handles: 8 00:23:53.982 Max Placement Identifiers: 128 00:23:53.982 Number of Namespaces Supported: 256 00:23:53.982 Reclaim Unit Nominal Size: 6000000 bytes 00:23:53.982 Estimated Reclaim Unit Time Limit: Not Reported 00:23:53.983 RUH Desc #000: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #001: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #002: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #003: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #004: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #005: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #006: RUH Type: Initially Isolated 00:23:53.983 RUH Desc #007: RUH Type: Initially Isolated 00:23:53.983 00:23:53.983 FDP reclaim unit handle usage log page 00:23:53.983 ====================================== 00:23:53.983 Number of Reclaim Unit Handles: 8 00:23:53.983 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:23:53.983 RUH Usage Desc #001: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #002: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #003: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #004: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #005: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #006: RUH Attributes: Unused 00:23:53.983 RUH Usage Desc #007: RUH Attributes: Unused 00:23:53.983 00:23:53.983 FDP statistics log page 00:23:53.983 ======================= 00:23:53.983 Host bytes with metadata written: 425304064 00:23:53.983 Media bytes with metadata written: 425349120 00:23:53.983 Media bytes erased: 0 00:23:53.983 00:23:53.983 FDP events log page 00:23:53.983 =================== 00:23:53.983 Number of FDP events: 0 00:23:53.983 00:23:53.983 NVM Specific Namespace Data 00:23:53.983 =========================== 00:23:53.983 Logical Block Storage Tag Mask: 0 00:23:53.983 Protection Information Capabilities: 00:23:53.983 16b Guard Protection Information Storage Tag Support: No 00:23:53.983 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:23:53.983 Storage Tag Check Read Support: No 00:23:53.983 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:23:53.983 00:23:53.983 real 0m2.124s 00:23:53.983 user 0m0.807s 00:23:53.983 sys 0m1.078s 00:23:53.983 07:21:18 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.983 ************************************ 00:23:53.983 END TEST nvme_identify 00:23:53.983 ************************************ 00:23:53.983 07:21:18 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.241 07:21:18 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:23:54.241 07:21:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:54.241 07:21:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.241 07:21:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:54.241 ************************************ 00:23:54.241 START TEST nvme_perf 00:23:54.241 ************************************ 00:23:54.241 07:21:18 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:23:54.241 07:21:18 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:23:55.619 Initializing NVMe Controllers 00:23:55.619 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:55.619 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:55.619 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:55.619 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:55.619 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:55.619 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:23:55.619 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:23:55.619 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:23:55.619 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:23:55.619 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:23:55.619 Initialization complete. Launching workers. 
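For reference before the latency tables below: both SPDK binaries exercised above can be rerun by hand against the same QEMU-emulated controllers. This is a minimal sketch assuming this job's checkout path (/home/vagrant/spdk_repo/spdk) and an already-built tree; the flag glosses follow the tools' usage text, so verify them against your build with -h:

  # Identify one controller by PCIe transport ID, as nvme.sh does per bdf
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # The perf run whose results follow: 12288-byte (12 KiB) sequential reads
  # (-w read -o 12288) at queue depth 128 (-q) for 1 second (-t); -L enables
  # latency tracking, and doubling it (-LL) also prints the per-device
  # latency histograms shown further down
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

The -i 0 shared-memory ID and the -N flag are carried over verbatim from the harness invocation above rather than glossed here.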
00:23:55.619 ======================================================== 00:23:55.619 Latency(us) 00:23:55.619 Device Information : IOPS MiB/s Average min max 00:23:55.619 PCIE (0000:00:10.0) NSID 1 from core 0: 10132.25 118.74 12654.19 9030.50 49061.67 00:23:55.619 PCIE (0000:00:11.0) NSID 1 from core 0: 10132.25 118.74 12611.79 9075.44 45080.09 00:23:55.619 PCIE (0000:00:13.0) NSID 1 from core 0: 10132.25 118.74 12565.02 8891.28 41740.42 00:23:55.619 PCIE (0000:00:12.0) NSID 1 from core 0: 10132.25 118.74 12518.74 8878.80 37724.03 00:23:55.619 PCIE (0000:00:12.0) NSID 2 from core 0: 10132.25 118.74 12473.61 9092.38 33792.77 00:23:55.619 PCIE (0000:00:12.0) NSID 3 from core 0: 10132.25 118.74 12428.18 9146.93 29752.06 00:23:55.619 ======================================================== 00:23:55.619 Total : 60793.49 712.42 12541.92 8878.80 49061.67 00:23:55.619 00:23:55.619 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:55.619 ================================================================================= 00:23:55.619 1.00000% : 9799.192us 00:23:55.619 10.00000% : 10922.667us 00:23:55.619 25.00000% : 11484.404us 00:23:55.619 50.00000% : 12170.971us 00:23:55.619 75.00000% : 13044.785us 00:23:55.619 90.00000% : 13856.183us 00:23:55.619 95.00000% : 14480.335us 00:23:55.619 98.00000% : 17351.436us 00:23:55.619 99.00000% : 38198.126us 00:23:55.619 99.50000% : 46436.937us 00:23:55.619 99.90000% : 48683.886us 00:23:55.619 99.99000% : 49183.208us 00:23:55.619 99.99900% : 49183.208us 00:23:55.619 99.99990% : 49183.208us 00:23:55.619 99.99999% : 49183.208us 00:23:55.619 00:23:55.619 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:23:55.619 ================================================================================= 00:23:55.619 1.00000% : 9861.608us 00:23:55.619 10.00000% : 11047.497us 00:23:55.619 25.00000% : 11484.404us 00:23:55.620 50.00000% : 12108.556us 00:23:55.620 75.00000% : 13044.785us 00:23:55.620 90.00000% : 13793.768us 00:23:55.620 95.00000% : 14542.750us 00:23:55.620 98.00000% : 17476.267us 00:23:55.620 99.00000% : 35202.194us 00:23:55.620 99.50000% : 42692.023us 00:23:55.620 99.90000% : 44689.310us 00:23:55.620 99.99000% : 45188.632us 00:23:55.620 99.99900% : 45188.632us 00:23:55.620 99.99990% : 45188.632us 00:23:55.620 99.99999% : 45188.632us 00:23:55.620 00:23:55.620 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:23:55.620 ================================================================================= 00:23:55.620 1.00000% : 9799.192us 00:23:55.620 10.00000% : 10985.082us 00:23:55.620 25.00000% : 11484.404us 00:23:55.620 50.00000% : 12108.556us 00:23:55.620 75.00000% : 13044.785us 00:23:55.620 90.00000% : 13793.768us 00:23:55.620 95.00000% : 14729.996us 00:23:55.620 98.00000% : 17601.097us 00:23:55.620 99.00000% : 31582.110us 00:23:55.620 99.50000% : 39446.430us 00:23:55.620 99.90000% : 41443.718us 00:23:55.620 99.99000% : 41693.379us 00:23:55.620 99.99900% : 41943.040us 00:23:55.620 99.99990% : 41943.040us 00:23:55.620 99.99999% : 41943.040us 00:23:55.620 00:23:55.620 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:23:55.620 ================================================================================= 00:23:55.620 1.00000% : 9736.777us 00:23:55.620 10.00000% : 10985.082us 00:23:55.620 25.00000% : 11484.404us 00:23:55.620 50.00000% : 12108.556us 00:23:55.620 75.00000% : 13044.785us 00:23:55.620 90.00000% : 13793.768us 00:23:55.620 95.00000% : 14729.996us 00:23:55.620 98.00000% : 
17725.928us 00:23:55.620 99.00000% : 27587.535us 00:23:55.620 99.50000% : 35202.194us 00:23:55.620 99.90000% : 37449.143us 00:23:55.620 99.99000% : 37698.804us 00:23:55.620 99.99900% : 37948.465us 00:23:55.620 99.99990% : 37948.465us 00:23:55.620 99.99999% : 37948.465us 00:23:55.620 00:23:55.620 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:23:55.620 ================================================================================= 00:23:55.620 1.00000% : 9736.777us 00:23:55.620 10.00000% : 11047.497us 00:23:55.620 25.00000% : 11484.404us 00:23:55.620 50.00000% : 12170.971us 00:23:55.620 75.00000% : 13044.785us 00:23:55.620 90.00000% : 13793.768us 00:23:55.620 95.00000% : 14542.750us 00:23:55.620 98.00000% : 17226.606us 00:23:55.620 99.00000% : 23592.960us 00:23:55.620 99.50000% : 31332.450us 00:23:55.620 99.90000% : 33454.568us 00:23:55.620 99.99000% : 33953.890us 00:23:55.620 99.99900% : 33953.890us 00:23:55.620 99.99990% : 33953.890us 00:23:55.620 99.99999% : 33953.890us 00:23:55.620 00:23:55.620 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:23:55.620 ================================================================================= 00:23:55.620 1.00000% : 9861.608us 00:23:55.620 10.00000% : 10985.082us 00:23:55.620 25.00000% : 11484.404us 00:23:55.620 50.00000% : 12170.971us 00:23:55.620 75.00000% : 13044.785us 00:23:55.620 90.00000% : 13793.768us 00:23:55.620 95.00000% : 14480.335us 00:23:55.620 98.00000% : 17351.436us 00:23:55.620 99.00000% : 19723.215us 00:23:55.620 99.50000% : 27462.705us 00:23:55.620 99.90000% : 29335.162us 00:23:55.620 99.99000% : 29709.653us 00:23:55.620 99.99900% : 29834.484us 00:23:55.620 99.99990% : 29834.484us 00:23:55.620 99.99999% : 29834.484us 00:23:55.620 00:23:55.620 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:55.620 ============================================================================== 00:23:55.620 Range in us Cumulative IO count 00:23:55.620 8987.794 - 9050.210: 0.0197% ( 2) 00:23:55.620 9050.210 - 9112.625: 0.0590% ( 4) 00:23:55.620 9112.625 - 9175.040: 0.1081% ( 5) 00:23:55.620 9175.040 - 9237.455: 0.1769% ( 7) 00:23:55.620 9237.455 - 9299.870: 0.2260% ( 5) 00:23:55.620 9299.870 - 9362.286: 0.2850% ( 6) 00:23:55.620 9362.286 - 9424.701: 0.3243% ( 4) 00:23:55.620 9424.701 - 9487.116: 0.3931% ( 7) 00:23:55.620 9487.116 - 9549.531: 0.5012% ( 11) 00:23:55.620 9549.531 - 9611.947: 0.6191% ( 12) 00:23:55.620 9611.947 - 9674.362: 0.7665% ( 15) 00:23:55.620 9674.362 - 9736.777: 0.8943% ( 13) 00:23:55.620 9736.777 - 9799.192: 1.0318% ( 14) 00:23:55.620 9799.192 - 9861.608: 1.1498% ( 12) 00:23:55.620 9861.608 - 9924.023: 1.3070% ( 16) 00:23:55.620 9924.023 - 9986.438: 1.5134% ( 21) 00:23:55.620 9986.438 - 10048.853: 1.7394% ( 23) 00:23:55.620 10048.853 - 10111.269: 2.0244% ( 29) 00:23:55.620 10111.269 - 10173.684: 2.2700% ( 25) 00:23:55.620 10173.684 - 10236.099: 2.6533% ( 39) 00:23:55.620 10236.099 - 10298.514: 3.0759% ( 43) 00:23:55.620 10298.514 - 10360.930: 3.5476% ( 48) 00:23:55.620 10360.930 - 10423.345: 4.0684% ( 53) 00:23:55.620 10423.345 - 10485.760: 4.6089% ( 55) 00:23:55.620 10485.760 - 10548.175: 5.1494% ( 55) 00:23:55.620 10548.175 - 10610.590: 5.8569% ( 72) 00:23:55.620 10610.590 - 10673.006: 6.6333% ( 79) 00:23:55.620 10673.006 - 10735.421: 7.4194% ( 80) 00:23:55.620 10735.421 - 10797.836: 8.3235% ( 92) 00:23:55.620 10797.836 - 10860.251: 9.2079% ( 90) 00:23:55.620 10860.251 - 10922.667: 10.2005% ( 101) 00:23:55.620 10922.667 - 10985.082: 11.3601% ( 118) 00:23:55.620 
10985.082 - 11047.497: 12.6965% ( 136) 00:23:55.620 11047.497 - 11109.912: 14.2099% ( 154) 00:23:55.620 11109.912 - 11172.328: 15.9591% ( 178) 00:23:55.620 11172.328 - 11234.743: 17.7771% ( 185) 00:23:55.620 11234.743 - 11297.158: 19.7131% ( 197) 00:23:55.620 11297.158 - 11359.573: 21.8455% ( 217) 00:23:55.620 11359.573 - 11421.989: 23.7421% ( 193) 00:23:55.620 11421.989 - 11484.404: 25.9827% ( 228) 00:23:55.620 11484.404 - 11546.819: 28.0562% ( 211) 00:23:55.620 11546.819 - 11609.234: 30.1789% ( 216) 00:23:55.620 11609.234 - 11671.650: 32.2425% ( 210) 00:23:55.620 11671.650 - 11734.065: 34.5912% ( 239) 00:23:55.620 11734.065 - 11796.480: 36.8121% ( 226) 00:23:55.620 11796.480 - 11858.895: 39.0035% ( 223) 00:23:55.620 11858.895 - 11921.310: 41.3325% ( 237) 00:23:55.620 11921.310 - 11983.726: 43.6321% ( 234) 00:23:55.620 11983.726 - 12046.141: 46.0495% ( 246) 00:23:55.620 12046.141 - 12108.556: 48.3294% ( 232) 00:23:55.620 12108.556 - 12170.971: 50.6682% ( 238) 00:23:55.620 12170.971 - 12233.387: 53.1643% ( 254) 00:23:55.620 12233.387 - 12295.802: 55.5326% ( 241) 00:23:55.620 12295.802 - 12358.217: 57.8223% ( 233) 00:23:55.620 12358.217 - 12420.632: 59.9155% ( 213) 00:23:55.620 12420.632 - 12483.048: 61.8023% ( 192) 00:23:55.620 12483.048 - 12545.463: 63.5122% ( 174) 00:23:55.620 12545.463 - 12607.878: 65.2417% ( 176) 00:23:55.620 12607.878 - 12670.293: 66.9615% ( 175) 00:23:55.620 12670.293 - 12732.709: 68.6714% ( 174) 00:23:55.620 12732.709 - 12795.124: 70.1847% ( 154) 00:23:55.620 12795.124 - 12857.539: 71.7669% ( 161) 00:23:55.620 12857.539 - 12919.954: 73.2901% ( 155) 00:23:55.621 12919.954 - 12982.370: 74.8035% ( 154) 00:23:55.621 12982.370 - 13044.785: 76.1989% ( 142) 00:23:55.621 13044.785 - 13107.200: 77.6238% ( 145) 00:23:55.621 13107.200 - 13169.615: 79.0389% ( 144) 00:23:55.621 13169.615 - 13232.030: 80.2869% ( 127) 00:23:55.621 13232.030 - 13294.446: 81.4269% ( 116) 00:23:55.621 13294.446 - 13356.861: 82.6061% ( 120) 00:23:55.621 13356.861 - 13419.276: 83.6675% ( 108) 00:23:55.621 13419.276 - 13481.691: 84.7681% ( 112) 00:23:55.621 13481.691 - 13544.107: 85.8491% ( 110) 00:23:55.621 13544.107 - 13606.522: 86.8907% ( 106) 00:23:55.621 13606.522 - 13668.937: 87.9324% ( 106) 00:23:55.621 13668.937 - 13731.352: 88.8070% ( 89) 00:23:55.621 13731.352 - 13793.768: 89.6128% ( 82) 00:23:55.621 13793.768 - 13856.183: 90.4481% ( 85) 00:23:55.621 13856.183 - 13918.598: 91.1262% ( 69) 00:23:55.621 13918.598 - 13981.013: 91.9418% ( 83) 00:23:55.621 13981.013 - 14043.429: 92.5708% ( 64) 00:23:55.621 14043.429 - 14105.844: 93.1899% ( 63) 00:23:55.621 14105.844 - 14168.259: 93.7402% ( 56) 00:23:55.621 14168.259 - 14230.674: 94.0055% ( 27) 00:23:55.621 14230.674 - 14293.090: 94.3494% ( 35) 00:23:55.621 14293.090 - 14355.505: 94.6148% ( 27) 00:23:55.621 14355.505 - 14417.920: 94.8998% ( 29) 00:23:55.621 14417.920 - 14480.335: 95.0963% ( 20) 00:23:55.621 14480.335 - 14542.750: 95.2830% ( 19) 00:23:55.621 14542.750 - 14605.166: 95.4501% ( 17) 00:23:55.621 14605.166 - 14667.581: 95.5483% ( 10) 00:23:55.621 14667.581 - 14729.996: 95.6761% ( 13) 00:23:55.621 14729.996 - 14792.411: 95.7645% ( 9) 00:23:55.621 14792.411 - 14854.827: 95.8628% ( 10) 00:23:55.621 14854.827 - 14917.242: 95.9611% ( 10) 00:23:55.621 14917.242 - 14979.657: 96.0397% ( 8) 00:23:55.621 14979.657 - 15042.072: 96.0987% ( 6) 00:23:55.621 15042.072 - 15104.488: 96.1969% ( 10) 00:23:55.621 15104.488 - 15166.903: 96.2854% ( 9) 00:23:55.621 15166.903 - 15229.318: 96.3836% ( 10) 00:23:55.621 15229.318 - 15291.733: 96.4721% ( 9) 
00:23:55.621 15291.733 - 15354.149: 96.5704% ( 10) 00:23:55.621 15354.149 - 15416.564: 96.6490% ( 8) 00:23:55.621 15416.564 - 15478.979: 96.7374% ( 9) 00:23:55.621 15478.979 - 15541.394: 96.7964% ( 6) 00:23:55.621 15541.394 - 15603.810: 96.9045% ( 11) 00:23:55.621 15603.810 - 15666.225: 96.9634% ( 6) 00:23:55.621 15666.225 - 15728.640: 97.0519% ( 9) 00:23:55.621 15728.640 - 15791.055: 97.1305% ( 8) 00:23:55.621 15791.055 - 15853.470: 97.1600% ( 3) 00:23:55.621 15853.470 - 15915.886: 97.2583% ( 10) 00:23:55.621 15915.886 - 15978.301: 97.2976% ( 4) 00:23:55.621 15978.301 - 16103.131: 97.3860% ( 9) 00:23:55.621 16103.131 - 16227.962: 97.4450% ( 6) 00:23:55.621 16227.962 - 16352.792: 97.4843% ( 4) 00:23:55.621 16352.792 - 16477.623: 97.5236% ( 4) 00:23:55.621 16477.623 - 16602.453: 97.5727% ( 5) 00:23:55.621 16602.453 - 16727.284: 97.6120% ( 4) 00:23:55.621 16727.284 - 16852.114: 97.6612% ( 5) 00:23:55.621 16852.114 - 16976.945: 97.7594% ( 10) 00:23:55.621 16976.945 - 17101.775: 97.8577% ( 10) 00:23:55.621 17101.775 - 17226.606: 97.9461% ( 9) 00:23:55.621 17226.606 - 17351.436: 98.0346% ( 9) 00:23:55.621 17351.436 - 17476.267: 98.1329% ( 10) 00:23:55.621 17476.267 - 17601.097: 98.2213% ( 9) 00:23:55.621 17601.097 - 17725.928: 98.3097% ( 9) 00:23:55.621 17725.928 - 17850.758: 98.3982% ( 9) 00:23:55.621 17850.758 - 17975.589: 98.4866% ( 9) 00:23:55.621 17975.589 - 18100.419: 98.5849% ( 10) 00:23:55.621 18100.419 - 18225.250: 98.6340% ( 5) 00:23:55.621 18225.250 - 18350.080: 98.6635% ( 3) 00:23:55.621 18350.080 - 18474.910: 98.7225% ( 6) 00:23:55.621 18474.910 - 18599.741: 98.7421% ( 2) 00:23:55.621 36700.160 - 36949.821: 98.7716% ( 3) 00:23:55.621 36949.821 - 37199.482: 98.8208% ( 5) 00:23:55.621 37199.482 - 37449.143: 98.8699% ( 5) 00:23:55.621 37449.143 - 37698.804: 98.9092% ( 4) 00:23:55.621 37698.804 - 37948.465: 98.9583% ( 5) 00:23:55.621 37948.465 - 38198.126: 99.0075% ( 5) 00:23:55.621 38198.126 - 38447.787: 99.0468% ( 4) 00:23:55.621 38447.787 - 38697.448: 99.1057% ( 6) 00:23:55.621 38697.448 - 38947.109: 99.1549% ( 5) 00:23:55.621 38947.109 - 39196.770: 99.2040% ( 5) 00:23:55.621 39196.770 - 39446.430: 99.2433% ( 4) 00:23:55.621 39446.430 - 39696.091: 99.3023% ( 6) 00:23:55.621 39696.091 - 39945.752: 99.3514% ( 5) 00:23:55.621 39945.752 - 40195.413: 99.3711% ( 2) 00:23:55.621 45438.293 - 45687.954: 99.3809% ( 1) 00:23:55.621 45687.954 - 45937.615: 99.4300% ( 5) 00:23:55.621 45937.615 - 46187.276: 99.4792% ( 5) 00:23:55.621 46187.276 - 46436.937: 99.5185% ( 4) 00:23:55.621 46436.937 - 46686.598: 99.5676% ( 5) 00:23:55.621 46686.598 - 46936.259: 99.6167% ( 5) 00:23:55.621 46936.259 - 47185.920: 99.6659% ( 5) 00:23:55.621 47185.920 - 47435.581: 99.7150% ( 5) 00:23:55.621 47435.581 - 47685.242: 99.7445% ( 3) 00:23:55.621 47685.242 - 47934.903: 99.7838% ( 4) 00:23:55.621 47934.903 - 48184.564: 99.8428% ( 6) 00:23:55.621 48184.564 - 48434.225: 99.8919% ( 5) 00:23:55.621 48434.225 - 48683.886: 99.9312% ( 4) 00:23:55.621 48683.886 - 48933.547: 99.9803% ( 5) 00:23:55.621 48933.547 - 49183.208: 100.0000% ( 2) 00:23:55.621 00:23:55.621 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:23:55.621 ============================================================================== 00:23:55.621 Range in us Cumulative IO count 00:23:55.621 9050.210 - 9112.625: 0.0197% ( 2) 00:23:55.621 9112.625 - 9175.040: 0.0590% ( 4) 00:23:55.621 9175.040 - 9237.455: 0.0983% ( 4) 00:23:55.621 9237.455 - 9299.870: 0.1376% ( 4) 00:23:55.621 9299.870 - 9362.286: 0.1671% ( 3) 00:23:55.621 9362.286 - 9424.701: 
0.2064% ( 4) 00:23:55.621 9424.701 - 9487.116: 0.2850% ( 8) 00:23:55.621 9487.116 - 9549.531: 0.3833% ( 10) 00:23:55.621 9549.531 - 9611.947: 0.5110% ( 13) 00:23:55.621 9611.947 - 9674.362: 0.6584% ( 15) 00:23:55.621 9674.362 - 9736.777: 0.8353% ( 18) 00:23:55.621 9736.777 - 9799.192: 0.9729% ( 14) 00:23:55.621 9799.192 - 9861.608: 1.1498% ( 18) 00:23:55.621 9861.608 - 9924.023: 1.3561% ( 21) 00:23:55.621 9924.023 - 9986.438: 1.5723% ( 22) 00:23:55.621 9986.438 - 10048.853: 1.8082% ( 24) 00:23:55.621 10048.853 - 10111.269: 2.0637% ( 26) 00:23:55.621 10111.269 - 10173.684: 2.2995% ( 24) 00:23:55.621 10173.684 - 10236.099: 2.5256% ( 23) 00:23:55.621 10236.099 - 10298.514: 2.7811% ( 26) 00:23:55.621 10298.514 - 10360.930: 3.0759% ( 30) 00:23:55.621 10360.930 - 10423.345: 3.4984% ( 43) 00:23:55.621 10423.345 - 10485.760: 3.9603% ( 47) 00:23:55.621 10485.760 - 10548.175: 4.5204% ( 57) 00:23:55.621 10548.175 - 10610.590: 5.1002% ( 59) 00:23:55.621 10610.590 - 10673.006: 5.7390% ( 65) 00:23:55.621 10673.006 - 10735.421: 6.4564% ( 73) 00:23:55.621 10735.421 - 10797.836: 7.2229% ( 78) 00:23:55.621 10797.836 - 10860.251: 8.0189% ( 81) 00:23:55.621 10860.251 - 10922.667: 8.9623% ( 96) 00:23:55.621 10922.667 - 10985.082: 9.9351% ( 99) 00:23:55.621 10985.082 - 11047.497: 11.0161% ( 110) 00:23:55.621 11047.497 - 11109.912: 12.4017% ( 141) 00:23:55.621 11109.912 - 11172.328: 14.0134% ( 164) 00:23:55.621 11172.328 - 11234.743: 15.8019% ( 182) 00:23:55.621 11234.743 - 11297.158: 17.8950% ( 213) 00:23:55.621 11297.158 - 11359.573: 20.2142% ( 236) 00:23:55.621 11359.573 - 11421.989: 22.6710% ( 250) 00:23:55.621 11421.989 - 11484.404: 25.1278% ( 250) 00:23:55.621 11484.404 - 11546.819: 27.5943% ( 251) 00:23:55.621 11546.819 - 11609.234: 30.1789% ( 263) 00:23:55.621 11609.234 - 11671.650: 32.8715% ( 274) 00:23:55.621 11671.650 - 11734.065: 35.4068% ( 258) 00:23:55.621 11734.065 - 11796.480: 37.9815% ( 262) 00:23:55.621 11796.480 - 11858.895: 40.4285% ( 249) 00:23:55.621 11858.895 - 11921.310: 42.8656% ( 248) 00:23:55.621 11921.310 - 11983.726: 45.5189% ( 270) 00:23:55.621 11983.726 - 12046.141: 48.0641% ( 259) 00:23:55.621 12046.141 - 12108.556: 50.6093% ( 259) 00:23:55.621 12108.556 - 12170.971: 53.0464% ( 248) 00:23:55.621 12170.971 - 12233.387: 55.2182% ( 221) 00:23:55.621 12233.387 - 12295.802: 57.2818% ( 210) 00:23:55.621 12295.802 - 12358.217: 59.0998% ( 185) 00:23:55.622 12358.217 - 12420.632: 60.8097% ( 174) 00:23:55.622 12420.632 - 12483.048: 62.3035% ( 152) 00:23:55.622 12483.048 - 12545.463: 63.8954% ( 162) 00:23:55.622 12545.463 - 12607.878: 65.3892% ( 152) 00:23:55.622 12607.878 - 12670.293: 66.9123% ( 155) 00:23:55.622 12670.293 - 12732.709: 68.3766% ( 149) 00:23:55.622 12732.709 - 12795.124: 69.8899% ( 154) 00:23:55.622 12795.124 - 12857.539: 71.3345% ( 147) 00:23:55.622 12857.539 - 12919.954: 72.8774% ( 157) 00:23:55.622 12919.954 - 12982.370: 74.3318% ( 148) 00:23:55.622 12982.370 - 13044.785: 75.8451% ( 154) 00:23:55.622 13044.785 - 13107.200: 77.2897% ( 147) 00:23:55.622 13107.200 - 13169.615: 78.6557% ( 139) 00:23:55.622 13169.615 - 13232.030: 80.0216% ( 139) 00:23:55.622 13232.030 - 13294.446: 81.2795% ( 128) 00:23:55.622 13294.446 - 13356.861: 82.6356% ( 138) 00:23:55.622 13356.861 - 13419.276: 83.8935% ( 128) 00:23:55.622 13419.276 - 13481.691: 85.1415% ( 127) 00:23:55.622 13481.691 - 13544.107: 86.3208% ( 120) 00:23:55.622 13544.107 - 13606.522: 87.4509% ( 115) 00:23:55.622 13606.522 - 13668.937: 88.5024% ( 107) 00:23:55.622 13668.937 - 13731.352: 89.4851% ( 100) 00:23:55.622 13731.352 
- 13793.768: 90.3990% ( 93) 00:23:55.622 13793.768 - 13856.183: 91.1851% ( 80) 00:23:55.622 13856.183 - 13918.598: 91.8927% ( 72) 00:23:55.622 13918.598 - 13981.013: 92.5020% ( 62) 00:23:55.622 13981.013 - 14043.429: 92.9344% ( 44) 00:23:55.622 14043.429 - 14105.844: 93.3471% ( 42) 00:23:55.622 14105.844 - 14168.259: 93.7303% ( 39) 00:23:55.622 14168.259 - 14230.674: 94.0252% ( 30) 00:23:55.622 14230.674 - 14293.090: 94.3494% ( 33) 00:23:55.622 14293.090 - 14355.505: 94.5951% ( 25) 00:23:55.622 14355.505 - 14417.920: 94.8015% ( 21) 00:23:55.622 14417.920 - 14480.335: 94.9882% ( 19) 00:23:55.622 14480.335 - 14542.750: 95.1454% ( 16) 00:23:55.622 14542.750 - 14605.166: 95.2634% ( 12) 00:23:55.622 14605.166 - 14667.581: 95.3518% ( 9) 00:23:55.622 14667.581 - 14729.996: 95.4796% ( 13) 00:23:55.622 14729.996 - 14792.411: 95.6073% ( 13) 00:23:55.622 14792.411 - 14854.827: 95.7449% ( 14) 00:23:55.622 14854.827 - 14917.242: 95.8825% ( 14) 00:23:55.622 14917.242 - 14979.657: 96.0004% ( 12) 00:23:55.622 14979.657 - 15042.072: 96.1183% ( 12) 00:23:55.622 15042.072 - 15104.488: 96.2166% ( 10) 00:23:55.622 15104.488 - 15166.903: 96.3149% ( 10) 00:23:55.622 15166.903 - 15229.318: 96.4230% ( 11) 00:23:55.622 15229.318 - 15291.733: 96.5311% ( 11) 00:23:55.622 15291.733 - 15354.149: 96.6392% ( 11) 00:23:55.622 15354.149 - 15416.564: 96.7178% ( 8) 00:23:55.622 15416.564 - 15478.979: 96.8062% ( 9) 00:23:55.622 15478.979 - 15541.394: 96.8848% ( 8) 00:23:55.622 15541.394 - 15603.810: 96.9831% ( 10) 00:23:55.622 15603.810 - 15666.225: 97.0617% ( 8) 00:23:55.622 15666.225 - 15728.640: 97.1502% ( 9) 00:23:55.622 15728.640 - 15791.055: 97.2288% ( 8) 00:23:55.622 15791.055 - 15853.470: 97.2779% ( 5) 00:23:55.622 15853.470 - 15915.886: 97.3270% ( 5) 00:23:55.622 15915.886 - 15978.301: 97.3762% ( 5) 00:23:55.622 15978.301 - 16103.131: 97.4744% ( 10) 00:23:55.622 16103.131 - 16227.962: 97.4843% ( 1) 00:23:55.622 16602.453 - 16727.284: 97.5334% ( 5) 00:23:55.622 16727.284 - 16852.114: 97.5924% ( 6) 00:23:55.622 16852.114 - 16976.945: 97.6513% ( 6) 00:23:55.622 16976.945 - 17101.775: 97.7300% ( 8) 00:23:55.622 17101.775 - 17226.606: 97.8381% ( 11) 00:23:55.622 17226.606 - 17351.436: 97.9461% ( 11) 00:23:55.622 17351.436 - 17476.267: 98.0641% ( 12) 00:23:55.622 17476.267 - 17601.097: 98.1722% ( 11) 00:23:55.622 17601.097 - 17725.928: 98.2901% ( 12) 00:23:55.622 17725.928 - 17850.758: 98.4080% ( 12) 00:23:55.622 17850.758 - 17975.589: 98.5161% ( 11) 00:23:55.622 17975.589 - 18100.419: 98.5849% ( 7) 00:23:55.622 18100.419 - 18225.250: 98.6340% ( 5) 00:23:55.622 18225.250 - 18350.080: 98.6832% ( 5) 00:23:55.622 18350.080 - 18474.910: 98.7323% ( 5) 00:23:55.622 18474.910 - 18599.741: 98.7421% ( 1) 00:23:55.622 33454.568 - 33704.229: 98.7520% ( 1) 00:23:55.622 33704.229 - 33953.890: 98.8011% ( 5) 00:23:55.622 33953.890 - 34203.550: 98.8502% ( 5) 00:23:55.622 34203.550 - 34453.211: 98.8797% ( 3) 00:23:55.622 34453.211 - 34702.872: 98.9289% ( 5) 00:23:55.622 34702.872 - 34952.533: 98.9878% ( 6) 00:23:55.622 34952.533 - 35202.194: 99.0468% ( 6) 00:23:55.622 35202.194 - 35451.855: 99.0959% ( 5) 00:23:55.622 35451.855 - 35701.516: 99.1450% ( 5) 00:23:55.622 35701.516 - 35951.177: 99.1942% ( 5) 00:23:55.622 35951.177 - 36200.838: 99.2433% ( 5) 00:23:55.622 36200.838 - 36450.499: 99.2925% ( 5) 00:23:55.622 36450.499 - 36700.160: 99.3514% ( 6) 00:23:55.622 36700.160 - 36949.821: 99.3711% ( 2) 00:23:55.622 41943.040 - 42192.701: 99.4104% ( 4) 00:23:55.622 42192.701 - 42442.362: 99.4497% ( 4) 00:23:55.622 42442.362 - 42692.023: 
99.5086% ( 6) 00:23:55.622 42692.023 - 42941.684: 99.5480% ( 4) 00:23:55.622 42941.684 - 43191.345: 99.6069% ( 6) 00:23:55.622 43191.345 - 43441.006: 99.6561% ( 5) 00:23:55.622 43441.006 - 43690.667: 99.7052% ( 5) 00:23:55.622 43690.667 - 43940.328: 99.7543% ( 5) 00:23:55.622 43940.328 - 44189.989: 99.8133% ( 6) 00:23:55.622 44189.989 - 44439.650: 99.8624% ( 5) 00:23:55.622 44439.650 - 44689.310: 99.9116% ( 5) 00:23:55.622 44689.310 - 44938.971: 99.9607% ( 5) 00:23:55.622 44938.971 - 45188.632: 100.0000% ( 4) 00:23:55.622 00:23:55.622 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:23:55.622 ============================================================================== 00:23:55.622 Range in us Cumulative IO count 00:23:55.622 8862.964 - 8925.379: 0.0197% ( 2) 00:23:55.622 8925.379 - 8987.794: 0.0590% ( 4) 00:23:55.622 8987.794 - 9050.210: 0.0983% ( 4) 00:23:55.622 9050.210 - 9112.625: 0.1278% ( 3) 00:23:55.622 9112.625 - 9175.040: 0.1671% ( 4) 00:23:55.622 9175.040 - 9237.455: 0.1965% ( 3) 00:23:55.622 9237.455 - 9299.870: 0.2358% ( 4) 00:23:55.622 9299.870 - 9362.286: 0.2752% ( 4) 00:23:55.622 9362.286 - 9424.701: 0.3439% ( 7) 00:23:55.622 9424.701 - 9487.116: 0.4226% ( 8) 00:23:55.622 9487.116 - 9549.531: 0.5405% ( 12) 00:23:55.622 9549.531 - 9611.947: 0.6388% ( 10) 00:23:55.622 9611.947 - 9674.362: 0.7665% ( 13) 00:23:55.622 9674.362 - 9736.777: 0.8943% ( 13) 00:23:55.622 9736.777 - 9799.192: 1.0417% ( 15) 00:23:55.622 9799.192 - 9861.608: 1.2382% ( 20) 00:23:55.622 9861.608 - 9924.023: 1.3954% ( 16) 00:23:55.622 9924.023 - 9986.438: 1.6313% ( 24) 00:23:55.622 9986.438 - 10048.853: 1.8278% ( 20) 00:23:55.622 10048.853 - 10111.269: 2.0637% ( 24) 00:23:55.622 10111.269 - 10173.684: 2.2995% ( 24) 00:23:55.622 10173.684 - 10236.099: 2.5649% ( 27) 00:23:55.622 10236.099 - 10298.514: 2.8695% ( 31) 00:23:55.622 10298.514 - 10360.930: 3.1643% ( 30) 00:23:55.622 10360.930 - 10423.345: 3.5574% ( 40) 00:23:55.622 10423.345 - 10485.760: 3.9701% ( 42) 00:23:55.622 10485.760 - 10548.175: 4.5401% ( 58) 00:23:55.622 10548.175 - 10610.590: 5.1101% ( 58) 00:23:55.622 10610.590 - 10673.006: 5.7193% ( 62) 00:23:55.622 10673.006 - 10735.421: 6.5055% ( 80) 00:23:55.622 10735.421 - 10797.836: 7.3703% ( 88) 00:23:55.622 10797.836 - 10860.251: 8.3333% ( 98) 00:23:55.622 10860.251 - 10922.667: 9.2276% ( 91) 00:23:55.622 10922.667 - 10985.082: 10.2201% ( 101) 00:23:55.622 10985.082 - 11047.497: 11.4583% ( 126) 00:23:55.622 11047.497 - 11109.912: 12.7752% ( 134) 00:23:55.622 11109.912 - 11172.328: 14.4458% ( 170) 00:23:55.622 11172.328 - 11234.743: 16.3227% ( 191) 00:23:55.622 11234.743 - 11297.158: 18.4552% ( 217) 00:23:55.623 11297.158 - 11359.573: 20.6859% ( 227) 00:23:55.623 11359.573 - 11421.989: 23.1034% ( 246) 00:23:55.623 11421.989 - 11484.404: 25.6879% ( 263) 00:23:55.623 11484.404 - 11546.819: 28.3412% ( 270) 00:23:55.623 11546.819 - 11609.234: 30.8667% ( 257) 00:23:55.623 11609.234 - 11671.650: 33.5004% ( 268) 00:23:55.623 11671.650 - 11734.065: 36.1046% ( 265) 00:23:55.623 11734.065 - 11796.480: 38.6006% ( 254) 00:23:55.623 11796.480 - 11858.895: 41.0672% ( 251) 00:23:55.623 11858.895 - 11921.310: 43.5731% ( 255) 00:23:55.623 11921.310 - 11983.726: 46.1085% ( 258) 00:23:55.623 11983.726 - 12046.141: 48.6832% ( 262) 00:23:55.623 12046.141 - 12108.556: 51.2284% ( 259) 00:23:55.623 12108.556 - 12170.971: 53.6065% ( 242) 00:23:55.623 12170.971 - 12233.387: 55.7685% ( 220) 00:23:55.623 12233.387 - 12295.802: 57.7142% ( 198) 00:23:55.623 12295.802 - 12358.217: 59.5126% ( 183) 00:23:55.623 
12358.217 - 12420.632: 61.1733% ( 169) 00:23:55.623 12420.632 - 12483.048: 62.7162% ( 157) 00:23:55.623 12483.048 - 12545.463: 64.0920% ( 140) 00:23:55.623 12545.463 - 12607.878: 65.5071% ( 144) 00:23:55.623 12607.878 - 12670.293: 66.9713% ( 149) 00:23:55.623 12670.293 - 12732.709: 68.3766% ( 143) 00:23:55.623 12732.709 - 12795.124: 69.8310% ( 148) 00:23:55.623 12795.124 - 12857.539: 71.3542% ( 155) 00:23:55.623 12857.539 - 12919.954: 72.8479% ( 152) 00:23:55.623 12919.954 - 12982.370: 74.3907% ( 157) 00:23:55.623 12982.370 - 13044.785: 75.8844% ( 152) 00:23:55.623 13044.785 - 13107.200: 77.3683% ( 151) 00:23:55.623 13107.200 - 13169.615: 78.7244% ( 138) 00:23:55.623 13169.615 - 13232.030: 80.1395% ( 144) 00:23:55.623 13232.030 - 13294.446: 81.3679% ( 125) 00:23:55.623 13294.446 - 13356.861: 82.6454% ( 130) 00:23:55.623 13356.861 - 13419.276: 83.9721% ( 135) 00:23:55.623 13419.276 - 13481.691: 85.1513% ( 120) 00:23:55.623 13481.691 - 13544.107: 86.3011% ( 117) 00:23:55.623 13544.107 - 13606.522: 87.3624% ( 108) 00:23:55.623 13606.522 - 13668.937: 88.3746% ( 103) 00:23:55.623 13668.937 - 13731.352: 89.3180% ( 96) 00:23:55.623 13731.352 - 13793.768: 90.1926% ( 89) 00:23:55.623 13793.768 - 13856.183: 90.9296% ( 75) 00:23:55.623 13856.183 - 13918.598: 91.6077% ( 69) 00:23:55.623 13918.598 - 13981.013: 92.1384% ( 54) 00:23:55.623 13981.013 - 14043.429: 92.6002% ( 47) 00:23:55.623 14043.429 - 14105.844: 93.0031% ( 41) 00:23:55.623 14105.844 - 14168.259: 93.3569% ( 36) 00:23:55.623 14168.259 - 14230.674: 93.6222% ( 27) 00:23:55.623 14230.674 - 14293.090: 93.8679% ( 25) 00:23:55.623 14293.090 - 14355.505: 94.1234% ( 26) 00:23:55.623 14355.505 - 14417.920: 94.3494% ( 23) 00:23:55.623 14417.920 - 14480.335: 94.5558% ( 21) 00:23:55.623 14480.335 - 14542.750: 94.6737% ( 12) 00:23:55.623 14542.750 - 14605.166: 94.7720% ( 10) 00:23:55.623 14605.166 - 14667.581: 94.8801% ( 11) 00:23:55.623 14667.581 - 14729.996: 95.0177% ( 14) 00:23:55.623 14729.996 - 14792.411: 95.1651% ( 15) 00:23:55.623 14792.411 - 14854.827: 95.3322% ( 17) 00:23:55.623 14854.827 - 14917.242: 95.4796% ( 15) 00:23:55.623 14917.242 - 14979.657: 95.6270% ( 15) 00:23:55.623 14979.657 - 15042.072: 95.7940% ( 17) 00:23:55.623 15042.072 - 15104.488: 95.9513% ( 16) 00:23:55.623 15104.488 - 15166.903: 96.1183% ( 17) 00:23:55.623 15166.903 - 15229.318: 96.2657% ( 15) 00:23:55.623 15229.318 - 15291.733: 96.4230% ( 16) 00:23:55.623 15291.733 - 15354.149: 96.5704% ( 15) 00:23:55.623 15354.149 - 15416.564: 96.6981% ( 13) 00:23:55.623 15416.564 - 15478.979: 96.8062% ( 11) 00:23:55.623 15478.979 - 15541.394: 96.9045% ( 10) 00:23:55.623 15541.394 - 15603.810: 96.9929% ( 9) 00:23:55.623 15603.810 - 15666.225: 97.0715% ( 8) 00:23:55.623 15666.225 - 15728.640: 97.1600% ( 9) 00:23:55.623 15728.640 - 15791.055: 97.2386% ( 8) 00:23:55.623 15791.055 - 15853.470: 97.3074% ( 7) 00:23:55.623 15853.470 - 15915.886: 97.3762% ( 7) 00:23:55.623 15915.886 - 15978.301: 97.4351% ( 6) 00:23:55.623 15978.301 - 16103.131: 97.4843% ( 5) 00:23:55.623 16602.453 - 16727.284: 97.5138% ( 3) 00:23:55.623 16727.284 - 16852.114: 97.5629% ( 5) 00:23:55.623 16852.114 - 16976.945: 97.6120% ( 5) 00:23:55.623 16976.945 - 17101.775: 97.6710% ( 6) 00:23:55.623 17101.775 - 17226.606: 97.7398% ( 7) 00:23:55.623 17226.606 - 17351.436: 97.8577% ( 12) 00:23:55.623 17351.436 - 17476.267: 97.9658% ( 11) 00:23:55.623 17476.267 - 17601.097: 98.0739% ( 11) 00:23:55.623 17601.097 - 17725.928: 98.1918% ( 12) 00:23:55.623 17725.928 - 17850.758: 98.3097% ( 12) 00:23:55.623 17850.758 - 17975.589: 
98.4080% ( 10) 00:23:55.623 17975.589 - 18100.419: 98.5259% ( 12) 00:23:55.623 18100.419 - 18225.250: 98.5849% ( 6) 00:23:55.623 18225.250 - 18350.080: 98.6439% ( 6) 00:23:55.623 18350.080 - 18474.910: 98.6930% ( 5) 00:23:55.623 18474.910 - 18599.741: 98.7421% ( 5) 00:23:55.623 30208.975 - 30333.806: 98.7618% ( 2) 00:23:55.623 30333.806 - 30458.636: 98.7913% ( 3) 00:23:55.623 30458.636 - 30583.467: 98.8109% ( 2) 00:23:55.623 30583.467 - 30708.297: 98.8404% ( 3) 00:23:55.623 30708.297 - 30833.128: 98.8601% ( 2) 00:23:55.623 30833.128 - 30957.958: 98.8797% ( 2) 00:23:55.623 30957.958 - 31082.789: 98.9092% ( 3) 00:23:55.623 31082.789 - 31207.619: 98.9387% ( 3) 00:23:55.623 31207.619 - 31332.450: 98.9583% ( 2) 00:23:55.623 31332.450 - 31457.280: 98.9878% ( 3) 00:23:55.623 31457.280 - 31582.110: 99.0173% ( 3) 00:23:55.623 31582.110 - 31706.941: 99.0369% ( 2) 00:23:55.623 31706.941 - 31831.771: 99.0664% ( 3) 00:23:55.623 31831.771 - 31956.602: 99.0959% ( 3) 00:23:55.623 31956.602 - 32206.263: 99.1352% ( 4) 00:23:55.623 32206.263 - 32455.924: 99.1942% ( 6) 00:23:55.623 32455.924 - 32705.585: 99.2433% ( 5) 00:23:55.623 32705.585 - 32955.246: 99.2925% ( 5) 00:23:55.623 32955.246 - 33204.907: 99.3514% ( 6) 00:23:55.623 33204.907 - 33454.568: 99.3711% ( 2) 00:23:55.623 38447.787 - 38697.448: 99.3907% ( 2) 00:23:55.623 38697.448 - 38947.109: 99.4300% ( 4) 00:23:55.623 38947.109 - 39196.770: 99.4890% ( 6) 00:23:55.623 39196.770 - 39446.430: 99.5283% ( 4) 00:23:55.623 39446.430 - 39696.091: 99.5774% ( 5) 00:23:55.623 39696.091 - 39945.752: 99.6364% ( 6) 00:23:55.623 39945.752 - 40195.413: 99.6757% ( 4) 00:23:55.623 40195.413 - 40445.074: 99.7248% ( 5) 00:23:55.623 40445.074 - 40694.735: 99.7740% ( 5) 00:23:55.623 40694.735 - 40944.396: 99.8329% ( 6) 00:23:55.623 40944.396 - 41194.057: 99.8821% ( 5) 00:23:55.623 41194.057 - 41443.718: 99.9312% ( 5) 00:23:55.623 41443.718 - 41693.379: 99.9902% ( 6) 00:23:55.623 41693.379 - 41943.040: 100.0000% ( 1) 00:23:55.623 00:23:55.623 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:23:55.623 ============================================================================== 00:23:55.623 Range in us Cumulative IO count 00:23:55.623 8862.964 - 8925.379: 0.0295% ( 3) 00:23:55.623 8925.379 - 8987.794: 0.0688% ( 4) 00:23:55.623 8987.794 - 9050.210: 0.1081% ( 4) 00:23:55.623 9050.210 - 9112.625: 0.1376% ( 3) 00:23:55.623 9112.625 - 9175.040: 0.1769% ( 4) 00:23:55.623 9175.040 - 9237.455: 0.2358% ( 6) 00:23:55.623 9237.455 - 9299.870: 0.3046% ( 7) 00:23:55.623 9299.870 - 9362.286: 0.4029% ( 10) 00:23:55.623 9362.286 - 9424.701: 0.5012% ( 10) 00:23:55.623 9424.701 - 9487.116: 0.5896% ( 9) 00:23:55.623 9487.116 - 9549.531: 0.6781% ( 9) 00:23:55.623 9549.531 - 9611.947: 0.7763% ( 10) 00:23:55.623 9611.947 - 9674.362: 0.9041% ( 13) 00:23:55.623 9674.362 - 9736.777: 1.0318% ( 13) 00:23:55.623 9736.777 - 9799.192: 1.1891% ( 16) 00:23:55.623 9799.192 - 9861.608: 1.3463% ( 16) 00:23:55.623 9861.608 - 9924.023: 1.4937% ( 15) 00:23:55.623 9924.023 - 9986.438: 1.6509% ( 16) 00:23:55.623 9986.438 - 10048.853: 1.8377% ( 19) 00:23:55.624 10048.853 - 10111.269: 2.0244% ( 19) 00:23:55.624 10111.269 - 10173.684: 2.2602% ( 24) 00:23:55.624 10173.684 - 10236.099: 2.5550% ( 30) 00:23:55.624 10236.099 - 10298.514: 2.9088% ( 36) 00:23:55.624 10298.514 - 10360.930: 3.2724% ( 37) 00:23:55.624 10360.930 - 10423.345: 3.6458% ( 38) 00:23:55.624 10423.345 - 10485.760: 4.0782% ( 44) 00:23:55.624 10485.760 - 10548.175: 4.5401% ( 47) 00:23:55.624 10548.175 - 10610.590: 5.0609% ( 53) 
00:23:55.624 10610.590 - 10673.006: 5.7095% ( 66) 00:23:55.624 10673.006 - 10735.421: 6.4269% ( 73) 00:23:55.624 10735.421 - 10797.836: 7.2622% ( 85) 00:23:55.624 10797.836 - 10860.251: 8.0582% ( 81) 00:23:55.624 10860.251 - 10922.667: 8.9917% ( 95) 00:23:55.624 10922.667 - 10985.082: 10.0727% ( 110) 00:23:55.624 10985.082 - 11047.497: 11.2913% ( 124) 00:23:55.624 11047.497 - 11109.912: 12.7358% ( 147) 00:23:55.624 11109.912 - 11172.328: 14.4752% ( 177) 00:23:55.624 11172.328 - 11234.743: 16.4898% ( 205) 00:23:55.624 11234.743 - 11297.158: 18.6321% ( 218) 00:23:55.624 11297.158 - 11359.573: 20.8137% ( 222) 00:23:55.624 11359.573 - 11421.989: 23.2606% ( 249) 00:23:55.624 11421.989 - 11484.404: 25.7862% ( 257) 00:23:55.624 11484.404 - 11546.819: 28.3019% ( 256) 00:23:55.624 11546.819 - 11609.234: 30.7881% ( 253) 00:23:55.624 11609.234 - 11671.650: 33.3432% ( 260) 00:23:55.624 11671.650 - 11734.065: 35.9178% ( 262) 00:23:55.624 11734.065 - 11796.480: 38.4139% ( 254) 00:23:55.624 11796.480 - 11858.895: 40.8510% ( 248) 00:23:55.624 11858.895 - 11921.310: 43.2390% ( 243) 00:23:55.624 11921.310 - 11983.726: 45.7645% ( 257) 00:23:55.624 11983.726 - 12046.141: 48.3491% ( 263) 00:23:55.624 12046.141 - 12108.556: 50.7567% ( 245) 00:23:55.624 12108.556 - 12170.971: 53.0857% ( 237) 00:23:55.624 12170.971 - 12233.387: 55.1592% ( 211) 00:23:55.624 12233.387 - 12295.802: 57.0755% ( 195) 00:23:55.624 12295.802 - 12358.217: 58.7264% ( 168) 00:23:55.624 12358.217 - 12420.632: 60.3479% ( 165) 00:23:55.624 12420.632 - 12483.048: 61.7925% ( 147) 00:23:55.624 12483.048 - 12545.463: 63.3550% ( 159) 00:23:55.624 12545.463 - 12607.878: 64.8094% ( 148) 00:23:55.624 12607.878 - 12670.293: 66.4210% ( 164) 00:23:55.624 12670.293 - 12732.709: 67.8164% ( 142) 00:23:55.624 12732.709 - 12795.124: 69.3494% ( 156) 00:23:55.624 12795.124 - 12857.539: 70.9414% ( 162) 00:23:55.624 12857.539 - 12919.954: 72.5727% ( 166) 00:23:55.624 12919.954 - 12982.370: 74.1352% ( 159) 00:23:55.624 12982.370 - 13044.785: 75.7174% ( 161) 00:23:55.624 13044.785 - 13107.200: 77.3290% ( 164) 00:23:55.624 13107.200 - 13169.615: 78.8620% ( 156) 00:23:55.624 13169.615 - 13232.030: 80.2476% ( 141) 00:23:55.624 13232.030 - 13294.446: 81.6038% ( 138) 00:23:55.624 13294.446 - 13356.861: 82.9108% ( 133) 00:23:55.624 13356.861 - 13419.276: 84.0900% ( 120) 00:23:55.624 13419.276 - 13481.691: 85.2594% ( 119) 00:23:55.624 13481.691 - 13544.107: 86.4485% ( 121) 00:23:55.624 13544.107 - 13606.522: 87.5295% ( 110) 00:23:55.624 13606.522 - 13668.937: 88.5515% ( 104) 00:23:55.624 13668.937 - 13731.352: 89.4556% ( 92) 00:23:55.624 13731.352 - 13793.768: 90.3498% ( 91) 00:23:55.624 13793.768 - 13856.183: 91.0869% ( 75) 00:23:55.624 13856.183 - 13918.598: 91.7551% ( 68) 00:23:55.624 13918.598 - 13981.013: 92.3054% ( 56) 00:23:55.624 13981.013 - 14043.429: 92.6887% ( 39) 00:23:55.624 14043.429 - 14105.844: 93.0621% ( 38) 00:23:55.624 14105.844 - 14168.259: 93.3962% ( 34) 00:23:55.624 14168.259 - 14230.674: 93.6910% ( 30) 00:23:55.624 14230.674 - 14293.090: 93.9760% ( 29) 00:23:55.624 14293.090 - 14355.505: 94.2217% ( 25) 00:23:55.624 14355.505 - 14417.920: 94.4182% ( 20) 00:23:55.624 14417.920 - 14480.335: 94.5755% ( 16) 00:23:55.624 14480.335 - 14542.750: 94.7032% ( 13) 00:23:55.624 14542.750 - 14605.166: 94.8211% ( 12) 00:23:55.624 14605.166 - 14667.581: 94.9391% ( 12) 00:23:55.624 14667.581 - 14729.996: 95.0570% ( 12) 00:23:55.624 14729.996 - 14792.411: 95.2142% ( 16) 00:23:55.624 14792.411 - 14854.827: 95.3616% ( 15) 00:23:55.624 14854.827 - 14917.242: 95.5287% ( 
17) 00:23:55.624 14917.242 - 14979.657: 95.6859% ( 16) 00:23:55.624 14979.657 - 15042.072: 95.8530% ( 17) 00:23:55.624 15042.072 - 15104.488: 96.0004% ( 15) 00:23:55.624 15104.488 - 15166.903: 96.1576% ( 16) 00:23:55.624 15166.903 - 15229.318: 96.3247% ( 17) 00:23:55.624 15229.318 - 15291.733: 96.4623% ( 14) 00:23:55.624 15291.733 - 15354.149: 96.6097% ( 15) 00:23:55.624 15354.149 - 15416.564: 96.7374% ( 13) 00:23:55.624 15416.564 - 15478.979: 96.8160% ( 8) 00:23:55.624 15478.979 - 15541.394: 96.8947% ( 8) 00:23:55.624 15541.394 - 15603.810: 96.9831% ( 9) 00:23:55.624 15603.810 - 15666.225: 97.0519% ( 7) 00:23:55.624 15666.225 - 15728.640: 97.1305% ( 8) 00:23:55.624 15728.640 - 15791.055: 97.2189% ( 9) 00:23:55.624 15791.055 - 15853.470: 97.2779% ( 6) 00:23:55.624 15853.470 - 15915.886: 97.3369% ( 6) 00:23:55.624 15915.886 - 15978.301: 97.3958% ( 6) 00:23:55.624 15978.301 - 16103.131: 97.4843% ( 9) 00:23:55.624 16602.453 - 16727.284: 97.4941% ( 1) 00:23:55.624 16727.284 - 16852.114: 97.5531% ( 6) 00:23:55.624 16852.114 - 16976.945: 97.6022% ( 5) 00:23:55.624 16976.945 - 17101.775: 97.6513% ( 5) 00:23:55.624 17101.775 - 17226.606: 97.7103% ( 6) 00:23:55.624 17226.606 - 17351.436: 97.7693% ( 6) 00:23:55.624 17351.436 - 17476.267: 97.8675% ( 10) 00:23:55.624 17476.267 - 17601.097: 97.9953% ( 13) 00:23:55.624 17601.097 - 17725.928: 98.1230% ( 13) 00:23:55.624 17725.928 - 17850.758: 98.2410% ( 12) 00:23:55.624 17850.758 - 17975.589: 98.3491% ( 11) 00:23:55.624 17975.589 - 18100.419: 98.4768% ( 13) 00:23:55.624 18100.419 - 18225.250: 98.5653% ( 9) 00:23:55.624 18225.250 - 18350.080: 98.6242% ( 6) 00:23:55.624 18350.080 - 18474.910: 98.6733% ( 5) 00:23:55.624 18474.910 - 18599.741: 98.7323% ( 6) 00:23:55.624 18599.741 - 18724.571: 98.7421% ( 1) 00:23:55.624 26214.400 - 26339.230: 98.7520% ( 1) 00:23:55.624 26339.230 - 26464.061: 98.7913% ( 4) 00:23:55.624 26464.061 - 26588.891: 98.8109% ( 2) 00:23:55.624 26588.891 - 26713.722: 98.8306% ( 2) 00:23:55.624 26713.722 - 26838.552: 98.8601% ( 3) 00:23:55.624 26838.552 - 26963.383: 98.8797% ( 2) 00:23:55.624 26963.383 - 27088.213: 98.9092% ( 3) 00:23:55.624 27088.213 - 27213.044: 98.9387% ( 3) 00:23:55.624 27213.044 - 27337.874: 98.9682% ( 3) 00:23:55.624 27337.874 - 27462.705: 98.9878% ( 2) 00:23:55.624 27462.705 - 27587.535: 99.0173% ( 3) 00:23:55.624 27587.535 - 27712.366: 99.0468% ( 3) 00:23:55.624 27712.366 - 27837.196: 99.0664% ( 2) 00:23:55.624 27837.196 - 27962.027: 99.0861% ( 2) 00:23:55.624 27962.027 - 28086.857: 99.1156% ( 3) 00:23:55.624 28086.857 - 28211.688: 99.1450% ( 3) 00:23:55.624 28211.688 - 28336.518: 99.1647% ( 2) 00:23:55.624 28336.518 - 28461.349: 99.1942% ( 3) 00:23:55.624 28461.349 - 28586.179: 99.2237% ( 3) 00:23:55.624 28586.179 - 28711.010: 99.2433% ( 2) 00:23:55.624 28711.010 - 28835.840: 99.2728% ( 3) 00:23:55.624 28835.840 - 28960.670: 99.2925% ( 2) 00:23:55.624 28960.670 - 29085.501: 99.3219% ( 3) 00:23:55.624 29085.501 - 29210.331: 99.3514% ( 3) 00:23:55.624 29210.331 - 29335.162: 99.3711% ( 2) 00:23:55.624 34453.211 - 34702.872: 99.4006% ( 3) 00:23:55.624 34702.872 - 34952.533: 99.4497% ( 5) 00:23:55.624 34952.533 - 35202.194: 99.5086% ( 6) 00:23:55.624 35202.194 - 35451.855: 99.5578% ( 5) 00:23:55.624 35451.855 - 35701.516: 99.6167% ( 6) 00:23:55.624 35701.516 - 35951.177: 99.6561% ( 4) 00:23:55.624 35951.177 - 36200.838: 99.7052% ( 5) 00:23:55.624 36200.838 - 36450.499: 99.7543% ( 5) 00:23:55.624 36450.499 - 36700.160: 99.8133% ( 6) 00:23:55.624 36700.160 - 36949.821: 99.8526% ( 4) 00:23:55.624 36949.821 - 37199.482: 
99.8821% ( 3) 00:23:55.624 37199.482 - 37449.143: 99.9312% ( 5) 00:23:55.624 37449.143 - 37698.804: 99.9902% ( 6) 00:23:55.624 37698.804 - 37948.465: 100.0000% ( 1) 00:23:55.624 00:23:55.624 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:23:55.624 ============================================================================== 00:23:55.624 Range in us Cumulative IO count 00:23:55.624 9050.210 - 9112.625: 0.0197% ( 2) 00:23:55.624 9112.625 - 9175.040: 0.0786% ( 6) 00:23:55.624 9175.040 - 9237.455: 0.1671% ( 9) 00:23:55.624 9237.455 - 9299.870: 0.2555% ( 9) 00:23:55.624 9299.870 - 9362.286: 0.3636% ( 11) 00:23:55.624 9362.286 - 9424.701: 0.4422% ( 8) 00:23:55.624 9424.701 - 9487.116: 0.5503% ( 11) 00:23:55.624 9487.116 - 9549.531: 0.6584% ( 11) 00:23:55.624 9549.531 - 9611.947: 0.7469% ( 9) 00:23:55.624 9611.947 - 9674.362: 0.8844% ( 14) 00:23:55.624 9674.362 - 9736.777: 1.0122% ( 13) 00:23:55.624 9736.777 - 9799.192: 1.1694% ( 16) 00:23:55.624 9799.192 - 9861.608: 1.2972% ( 13) 00:23:55.624 9861.608 - 9924.023: 1.4446% ( 15) 00:23:55.624 9924.023 - 9986.438: 1.6018% ( 16) 00:23:55.624 9986.438 - 10048.853: 1.8082% ( 21) 00:23:55.624 10048.853 - 10111.269: 2.0440% ( 24) 00:23:55.624 10111.269 - 10173.684: 2.3487% ( 31) 00:23:55.624 10173.684 - 10236.099: 2.6336% ( 29) 00:23:55.624 10236.099 - 10298.514: 2.9678% ( 34) 00:23:55.624 10298.514 - 10360.930: 3.2724% ( 31) 00:23:55.625 10360.930 - 10423.345: 3.5869% ( 32) 00:23:55.625 10423.345 - 10485.760: 3.9505% ( 37) 00:23:55.625 10485.760 - 10548.175: 4.3337% ( 39) 00:23:55.625 10548.175 - 10610.590: 4.8546% ( 53) 00:23:55.625 10610.590 - 10673.006: 5.4442% ( 60) 00:23:55.625 10673.006 - 10735.421: 6.1714% ( 74) 00:23:55.625 10735.421 - 10797.836: 6.9379% ( 78) 00:23:55.625 10797.836 - 10860.251: 7.7928% ( 87) 00:23:55.625 10860.251 - 10922.667: 8.7657% ( 99) 00:23:55.625 10922.667 - 10985.082: 9.8172% ( 107) 00:23:55.625 10985.082 - 11047.497: 11.1144% ( 132) 00:23:55.625 11047.497 - 11109.912: 12.5983% ( 151) 00:23:55.625 11109.912 - 11172.328: 14.4458% ( 188) 00:23:55.625 11172.328 - 11234.743: 16.3817% ( 197) 00:23:55.625 11234.743 - 11297.158: 18.5338% ( 219) 00:23:55.625 11297.158 - 11359.573: 20.7154% ( 222) 00:23:55.625 11359.573 - 11421.989: 22.9953% ( 232) 00:23:55.625 11421.989 - 11484.404: 25.4520% ( 250) 00:23:55.625 11484.404 - 11546.819: 27.9088% ( 250) 00:23:55.625 11546.819 - 11609.234: 30.3754% ( 251) 00:23:55.625 11609.234 - 11671.650: 32.8715% ( 254) 00:23:55.625 11671.650 - 11734.065: 35.3282% ( 250) 00:23:55.625 11734.065 - 11796.480: 37.7752% ( 249) 00:23:55.625 11796.480 - 11858.895: 40.2909% ( 256) 00:23:55.625 11858.895 - 11921.310: 42.6690% ( 242) 00:23:55.625 11921.310 - 11983.726: 45.0472% ( 242) 00:23:55.625 11983.726 - 12046.141: 47.3369% ( 233) 00:23:55.625 12046.141 - 12108.556: 49.7642% ( 247) 00:23:55.625 12108.556 - 12170.971: 52.0539% ( 233) 00:23:55.625 12170.971 - 12233.387: 54.3632% ( 235) 00:23:55.625 12233.387 - 12295.802: 56.3286% ( 200) 00:23:55.625 12295.802 - 12358.217: 58.0582% ( 176) 00:23:55.625 12358.217 - 12420.632: 59.6600% ( 163) 00:23:55.625 12420.632 - 12483.048: 61.2028% ( 157) 00:23:55.625 12483.048 - 12545.463: 62.7555% ( 158) 00:23:55.625 12545.463 - 12607.878: 64.2885% ( 156) 00:23:55.625 12607.878 - 12670.293: 65.7528% ( 149) 00:23:55.625 12670.293 - 12732.709: 67.1875% ( 146) 00:23:55.625 12732.709 - 12795.124: 68.7402% ( 158) 00:23:55.625 12795.124 - 12857.539: 70.2044% ( 149) 00:23:55.625 12857.539 - 12919.954: 71.8160% ( 164) 00:23:55.625 12919.954 - 12982.370: 
73.4572% ( 167) 00:23:55.625 12982.370 - 13044.785: 75.0786% ( 165) 00:23:55.625 13044.785 - 13107.200: 76.6706% ( 162) 00:23:55.625 13107.200 - 13169.615: 78.1741% ( 153) 00:23:55.625 13169.615 - 13232.030: 79.7563% ( 161) 00:23:55.625 13232.030 - 13294.446: 81.2009% ( 147) 00:23:55.625 13294.446 - 13356.861: 82.5472% ( 137) 00:23:55.625 13356.861 - 13419.276: 83.8247% ( 130) 00:23:55.625 13419.276 - 13481.691: 85.0629% ( 126) 00:23:55.625 13481.691 - 13544.107: 86.2520% ( 121) 00:23:55.625 13544.107 - 13606.522: 87.4509% ( 122) 00:23:55.625 13606.522 - 13668.937: 88.5417% ( 111) 00:23:55.625 13668.937 - 13731.352: 89.5833% ( 106) 00:23:55.625 13731.352 - 13793.768: 90.6152% ( 105) 00:23:55.625 13793.768 - 13856.183: 91.4406% ( 84) 00:23:55.625 13856.183 - 13918.598: 92.1777% ( 75) 00:23:55.625 13918.598 - 13981.013: 92.7673% ( 60) 00:23:55.625 13981.013 - 14043.429: 93.2488% ( 49) 00:23:55.625 14043.429 - 14105.844: 93.6026% ( 36) 00:23:55.625 14105.844 - 14168.259: 93.9465% ( 35) 00:23:55.625 14168.259 - 14230.674: 94.1922% ( 25) 00:23:55.625 14230.674 - 14293.090: 94.3888% ( 20) 00:23:55.625 14293.090 - 14355.505: 94.6148% ( 23) 00:23:55.625 14355.505 - 14417.920: 94.8310% ( 22) 00:23:55.625 14417.920 - 14480.335: 94.9980% ( 17) 00:23:55.625 14480.335 - 14542.750: 95.1454% ( 15) 00:23:55.625 14542.750 - 14605.166: 95.2732% ( 13) 00:23:55.625 14605.166 - 14667.581: 95.3518% ( 8) 00:23:55.625 14667.581 - 14729.996: 95.4796% ( 13) 00:23:55.625 14729.996 - 14792.411: 95.6171% ( 14) 00:23:55.625 14792.411 - 14854.827: 95.7645% ( 15) 00:23:55.625 14854.827 - 14917.242: 95.8825% ( 12) 00:23:55.625 14917.242 - 14979.657: 96.0004% ( 12) 00:23:55.625 14979.657 - 15042.072: 96.0888% ( 9) 00:23:55.625 15042.072 - 15104.488: 96.1969% ( 11) 00:23:55.625 15104.488 - 15166.903: 96.3050% ( 11) 00:23:55.625 15166.903 - 15229.318: 96.4131% ( 11) 00:23:55.625 15229.318 - 15291.733: 96.5311% ( 12) 00:23:55.625 15291.733 - 15354.149: 96.6293% ( 10) 00:23:55.625 15354.149 - 15416.564: 96.7276% ( 10) 00:23:55.625 15416.564 - 15478.979: 96.8062% ( 8) 00:23:55.625 15478.979 - 15541.394: 96.8947% ( 9) 00:23:55.625 15541.394 - 15603.810: 96.9831% ( 9) 00:23:55.625 15603.810 - 15666.225: 97.0617% ( 8) 00:23:55.625 15666.225 - 15728.640: 97.1502% ( 9) 00:23:55.625 15728.640 - 15791.055: 97.2288% ( 8) 00:23:55.625 15791.055 - 15853.470: 97.2976% ( 7) 00:23:55.625 15853.470 - 15915.886: 97.3565% ( 6) 00:23:55.625 15915.886 - 15978.301: 97.4057% ( 5) 00:23:55.625 15978.301 - 16103.131: 97.4843% ( 8) 00:23:55.625 16227.962 - 16352.792: 97.4941% ( 1) 00:23:55.625 16352.792 - 16477.623: 97.5334% ( 4) 00:23:55.625 16477.623 - 16602.453: 97.5924% ( 6) 00:23:55.625 16602.453 - 16727.284: 97.6415% ( 5) 00:23:55.625 16727.284 - 16852.114: 97.7103% ( 7) 00:23:55.625 16852.114 - 16976.945: 97.8282% ( 12) 00:23:55.625 16976.945 - 17101.775: 97.9461% ( 12) 00:23:55.625 17101.775 - 17226.606: 98.0641% ( 12) 00:23:55.625 17226.606 - 17351.436: 98.1820% ( 12) 00:23:55.625 17351.436 - 17476.267: 98.2803% ( 10) 00:23:55.625 17476.267 - 17601.097: 98.4080% ( 13) 00:23:55.625 17601.097 - 17725.928: 98.5161% ( 11) 00:23:55.625 17725.928 - 17850.758: 98.5751% ( 6) 00:23:55.625 17850.758 - 17975.589: 98.6242% ( 5) 00:23:55.625 17975.589 - 18100.419: 98.6635% ( 4) 00:23:55.625 18100.419 - 18225.250: 98.7225% ( 6) 00:23:55.625 18225.250 - 18350.080: 98.7421% ( 2) 00:23:55.625 22219.825 - 22344.655: 98.7618% ( 2) 00:23:55.625 22344.655 - 22469.486: 98.7814% ( 2) 00:23:55.625 22469.486 - 22594.316: 98.8109% ( 3) 00:23:55.625 22594.316 - 
22719.147: 98.8306% ( 2) 00:23:55.625 22719.147 - 22843.977: 98.8502% ( 2) 00:23:55.625 22843.977 - 22968.808: 98.8797% ( 3) 00:23:55.625 22968.808 - 23093.638: 98.8994% ( 2) 00:23:55.625 23093.638 - 23218.469: 98.9289% ( 3) 00:23:55.625 23218.469 - 23343.299: 98.9583% ( 3) 00:23:55.626 23343.299 - 23468.130: 98.9780% ( 2) 00:23:55.626 23468.130 - 23592.960: 99.0075% ( 3) 00:23:55.626 23592.960 - 23717.790: 99.0271% ( 2) 00:23:55.626 23717.790 - 23842.621: 99.0566% ( 3) 00:23:55.626 23842.621 - 23967.451: 99.0861% ( 3) 00:23:55.626 23967.451 - 24092.282: 99.1057% ( 2) 00:23:55.626 24092.282 - 24217.112: 99.1352% ( 3) 00:23:55.626 24217.112 - 24341.943: 99.1647% ( 3) 00:23:55.626 24341.943 - 24466.773: 99.1844% ( 2) 00:23:55.626 24466.773 - 24591.604: 99.2138% ( 3) 00:23:55.626 24591.604 - 24716.434: 99.2433% ( 3) 00:23:55.626 24716.434 - 24841.265: 99.2630% ( 2) 00:23:55.626 24841.265 - 24966.095: 99.2826% ( 2) 00:23:55.626 24966.095 - 25090.926: 99.3023% ( 2) 00:23:55.626 25090.926 - 25215.756: 99.3318% ( 3) 00:23:55.626 25215.756 - 25340.587: 99.3612% ( 3) 00:23:55.626 25340.587 - 25465.417: 99.3711% ( 1) 00:23:55.626 30583.467 - 30708.297: 99.3809% ( 1) 00:23:55.626 30708.297 - 30833.128: 99.4006% ( 2) 00:23:55.626 30833.128 - 30957.958: 99.4300% ( 3) 00:23:55.626 30957.958 - 31082.789: 99.4595% ( 3) 00:23:55.626 31082.789 - 31207.619: 99.4792% ( 2) 00:23:55.626 31207.619 - 31332.450: 99.5086% ( 3) 00:23:55.626 31332.450 - 31457.280: 99.5283% ( 2) 00:23:55.626 31457.280 - 31582.110: 99.5480% ( 2) 00:23:55.626 31582.110 - 31706.941: 99.5676% ( 2) 00:23:55.626 31706.941 - 31831.771: 99.5971% ( 3) 00:23:55.626 31831.771 - 31956.602: 99.6266% ( 3) 00:23:55.626 31956.602 - 32206.263: 99.6757% ( 5) 00:23:55.626 32206.263 - 32455.924: 99.7248% ( 5) 00:23:55.626 32455.924 - 32705.585: 99.7740% ( 5) 00:23:55.626 32705.585 - 32955.246: 99.8231% ( 5) 00:23:55.626 32955.246 - 33204.907: 99.8722% ( 5) 00:23:55.626 33204.907 - 33454.568: 99.9214% ( 5) 00:23:55.626 33454.568 - 33704.229: 99.9705% ( 5) 00:23:55.626 33704.229 - 33953.890: 100.0000% ( 3) 00:23:55.626 00:23:55.626 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:23:55.626 ============================================================================== 00:23:55.626 Range in us Cumulative IO count 00:23:55.626 9112.625 - 9175.040: 0.0197% ( 2) 00:23:55.626 9175.040 - 9237.455: 0.0786% ( 6) 00:23:55.626 9237.455 - 9299.870: 0.1376% ( 6) 00:23:55.626 9299.870 - 9362.286: 0.2064% ( 7) 00:23:55.626 9362.286 - 9424.701: 0.2653% ( 6) 00:23:55.626 9424.701 - 9487.116: 0.3538% ( 9) 00:23:55.626 9487.116 - 9549.531: 0.4226% ( 7) 00:23:55.626 9549.531 - 9611.947: 0.5503% ( 13) 00:23:55.626 9611.947 - 9674.362: 0.6584% ( 11) 00:23:55.626 9674.362 - 9736.777: 0.7862% ( 13) 00:23:55.626 9736.777 - 9799.192: 0.9434% ( 16) 00:23:55.626 9799.192 - 9861.608: 1.0810% ( 14) 00:23:55.626 9861.608 - 9924.023: 1.2382% ( 16) 00:23:55.626 9924.023 - 9986.438: 1.4151% ( 18) 00:23:55.626 9986.438 - 10048.853: 1.6116% ( 20) 00:23:55.626 10048.853 - 10111.269: 1.8082% ( 20) 00:23:55.626 10111.269 - 10173.684: 2.0637% ( 26) 00:23:55.626 10173.684 - 10236.099: 2.3290% ( 27) 00:23:55.626 10236.099 - 10298.514: 2.6828% ( 36) 00:23:55.626 10298.514 - 10360.930: 3.1053% ( 43) 00:23:55.626 10360.930 - 10423.345: 3.4886% ( 39) 00:23:55.626 10423.345 - 10485.760: 3.9996% ( 52) 00:23:55.626 10485.760 - 10548.175: 4.4910% ( 50) 00:23:55.626 10548.175 - 10610.590: 5.0216% ( 54) 00:23:55.626 10610.590 - 10673.006: 5.5916% ( 58) 00:23:55.626 10673.006 - 10735.421: 
6.3090% ( 73) 00:23:55.626 10735.421 - 10797.836: 7.1246% ( 83) 00:23:55.626 10797.836 - 10860.251: 8.0189% ( 91) 00:23:55.626 10860.251 - 10922.667: 9.0409% ( 104) 00:23:55.626 10922.667 - 10985.082: 10.1022% ( 108) 00:23:55.626 10985.082 - 11047.497: 11.3601% ( 128) 00:23:55.626 11047.497 - 11109.912: 12.9619% ( 163) 00:23:55.626 11109.912 - 11172.328: 14.7602% ( 183) 00:23:55.626 11172.328 - 11234.743: 16.6175% ( 189) 00:23:55.626 11234.743 - 11297.158: 18.6517% ( 207) 00:23:55.626 11297.158 - 11359.573: 20.8039% ( 219) 00:23:55.626 11359.573 - 11421.989: 23.0837% ( 232) 00:23:55.626 11421.989 - 11484.404: 25.5012% ( 246) 00:23:55.626 11484.404 - 11546.819: 27.8892% ( 243) 00:23:55.626 11546.819 - 11609.234: 30.3164% ( 247) 00:23:55.626 11609.234 - 11671.650: 32.8322% ( 256) 00:23:55.626 11671.650 - 11734.065: 35.2594% ( 247) 00:23:55.626 11734.065 - 11796.480: 37.5983% ( 238) 00:23:55.626 11796.480 - 11858.895: 40.0452% ( 249) 00:23:55.626 11858.895 - 11921.310: 42.4135% ( 241) 00:23:55.626 11921.310 - 11983.726: 44.9096% ( 254) 00:23:55.626 11983.726 - 12046.141: 47.2779% ( 241) 00:23:55.626 12046.141 - 12108.556: 49.6069% ( 237) 00:23:55.626 12108.556 - 12170.971: 51.9359% ( 237) 00:23:55.626 12170.971 - 12233.387: 54.0193% ( 212) 00:23:55.626 12233.387 - 12295.802: 55.8569% ( 187) 00:23:55.626 12295.802 - 12358.217: 57.5079% ( 168) 00:23:55.626 12358.217 - 12420.632: 59.1392% ( 166) 00:23:55.626 12420.632 - 12483.048: 60.7410% ( 163) 00:23:55.626 12483.048 - 12545.463: 62.2150% ( 150) 00:23:55.626 12545.463 - 12607.878: 63.7284% ( 154) 00:23:55.626 12607.878 - 12670.293: 65.2024% ( 150) 00:23:55.626 12670.293 - 12732.709: 66.7551% ( 158) 00:23:55.626 12732.709 - 12795.124: 68.3962% ( 167) 00:23:55.626 12795.124 - 12857.539: 70.1160% ( 175) 00:23:55.626 12857.539 - 12919.954: 71.8455% ( 176) 00:23:55.626 12919.954 - 12982.370: 73.4768% ( 166) 00:23:55.626 12982.370 - 13044.785: 75.1474% ( 170) 00:23:55.626 13044.785 - 13107.200: 76.6706% ( 155) 00:23:55.626 13107.200 - 13169.615: 78.1741% ( 153) 00:23:55.626 13169.615 - 13232.030: 79.6384% ( 149) 00:23:55.626 13232.030 - 13294.446: 81.0829% ( 147) 00:23:55.626 13294.446 - 13356.861: 82.4686% ( 141) 00:23:55.626 13356.861 - 13419.276: 83.8542% ( 141) 00:23:55.626 13419.276 - 13481.691: 85.2005% ( 137) 00:23:55.626 13481.691 - 13544.107: 86.4583% ( 128) 00:23:55.626 13544.107 - 13606.522: 87.6278% ( 119) 00:23:55.626 13606.522 - 13668.937: 88.6891% ( 108) 00:23:55.626 13668.937 - 13731.352: 89.6128% ( 94) 00:23:55.626 13731.352 - 13793.768: 90.5464% ( 95) 00:23:55.626 13793.768 - 13856.183: 91.4701% ( 94) 00:23:55.626 13856.183 - 13918.598: 92.3054% ( 85) 00:23:55.626 13918.598 - 13981.013: 92.9933% ( 70) 00:23:55.626 13981.013 - 14043.429: 93.5043% ( 52) 00:23:55.626 14043.429 - 14105.844: 93.8483% ( 35) 00:23:55.626 14105.844 - 14168.259: 94.1529% ( 31) 00:23:55.626 14168.259 - 14230.674: 94.3888% ( 24) 00:23:55.626 14230.674 - 14293.090: 94.6050% ( 22) 00:23:55.626 14293.090 - 14355.505: 94.7917% ( 19) 00:23:55.626 14355.505 - 14417.920: 94.9686% ( 18) 00:23:55.626 14417.920 - 14480.335: 95.0570% ( 9) 00:23:55.626 14480.335 - 14542.750: 95.1454% ( 9) 00:23:55.626 14542.750 - 14605.166: 95.2241% ( 8) 00:23:55.626 14605.166 - 14667.581: 95.3125% ( 9) 00:23:55.626 14667.581 - 14729.996: 95.4009% ( 9) 00:23:55.626 14729.996 - 14792.411: 95.4992% ( 10) 00:23:55.626 14792.411 - 14854.827: 95.6368% ( 14) 00:23:55.626 14854.827 - 14917.242: 95.7351% ( 10) 00:23:55.626 14917.242 - 14979.657: 95.8825% ( 15) 00:23:55.626 14979.657 - 15042.072: 
96.0004% ( 12) 00:23:55.626 15042.072 - 15104.488: 96.1380% ( 14) 00:23:55.626 15104.488 - 15166.903: 96.2657% ( 13) 00:23:55.626 15166.903 - 15229.318: 96.4230% ( 16) 00:23:55.626 15229.318 - 15291.733: 96.5507% ( 13) 00:23:55.626 15291.733 - 15354.149: 96.6686% ( 12) 00:23:55.626 15354.149 - 15416.564: 96.7571% ( 9) 00:23:55.626 15416.564 - 15478.979: 96.8259% ( 7) 00:23:55.626 15478.979 - 15541.394: 96.9241% ( 10) 00:23:55.626 15541.394 - 15603.810: 97.0126% ( 9) 00:23:55.626 15603.810 - 15666.225: 97.1010% ( 9) 00:23:55.626 15666.225 - 15728.640: 97.1895% ( 9) 00:23:55.626 15728.640 - 15791.055: 97.2681% ( 8) 00:23:55.626 15791.055 - 15853.470: 97.3172% ( 5) 00:23:55.626 15853.470 - 15915.886: 97.3664% ( 5) 00:23:55.626 15915.886 - 15978.301: 97.4155% ( 5) 00:23:55.626 15978.301 - 16103.131: 97.4843% ( 7) 00:23:55.626 16352.792 - 16477.623: 97.4941% ( 1) 00:23:55.627 16477.623 - 16602.453: 97.5432% ( 5) 00:23:55.627 16602.453 - 16727.284: 97.6120% ( 7) 00:23:55.627 16727.284 - 16852.114: 97.6513% ( 4) 00:23:55.627 16852.114 - 16976.945: 97.7496% ( 10) 00:23:55.627 16976.945 - 17101.775: 97.8675% ( 12) 00:23:55.627 17101.775 - 17226.606: 97.9756% ( 11) 00:23:55.627 17226.606 - 17351.436: 98.0837% ( 11) 00:23:55.627 17351.436 - 17476.267: 98.2017% ( 12) 00:23:55.627 17476.267 - 17601.097: 98.3196% ( 12) 00:23:55.627 17601.097 - 17725.928: 98.4080% ( 9) 00:23:55.627 17725.928 - 17850.758: 98.5161% ( 11) 00:23:55.627 17850.758 - 17975.589: 98.5947% ( 8) 00:23:55.627 17975.589 - 18100.419: 98.6439% ( 5) 00:23:55.627 18100.419 - 18225.250: 98.6930% ( 5) 00:23:55.627 18225.250 - 18350.080: 98.7421% ( 5) 00:23:55.627 18350.080 - 18474.910: 98.7618% ( 2) 00:23:55.627 18474.910 - 18599.741: 98.7913% ( 3) 00:23:55.627 18599.741 - 18724.571: 98.8109% ( 2) 00:23:55.627 18724.571 - 18849.402: 98.8404% ( 3) 00:23:55.627 18849.402 - 18974.232: 98.8699% ( 3) 00:23:55.627 18974.232 - 19099.063: 98.8895% ( 2) 00:23:55.627 19099.063 - 19223.893: 98.9190% ( 3) 00:23:55.627 19223.893 - 19348.724: 98.9485% ( 3) 00:23:55.627 19348.724 - 19473.554: 98.9682% ( 2) 00:23:55.627 19473.554 - 19598.385: 98.9878% ( 2) 00:23:55.627 19598.385 - 19723.215: 99.0173% ( 3) 00:23:55.627 19723.215 - 19848.046: 99.0369% ( 2) 00:23:55.627 19848.046 - 19972.876: 99.0664% ( 3) 00:23:55.627 19972.876 - 20097.707: 99.0959% ( 3) 00:23:55.627 20097.707 - 20222.537: 99.1156% ( 2) 00:23:55.627 20222.537 - 20347.368: 99.1450% ( 3) 00:23:55.627 20347.368 - 20472.198: 99.1745% ( 3) 00:23:55.627 20472.198 - 20597.029: 99.1942% ( 2) 00:23:55.627 20597.029 - 20721.859: 99.2237% ( 3) 00:23:55.627 20721.859 - 20846.690: 99.2433% ( 2) 00:23:55.627 20846.690 - 20971.520: 99.2630% ( 2) 00:23:55.627 20971.520 - 21096.350: 99.2925% ( 3) 00:23:55.627 21096.350 - 21221.181: 99.3219% ( 3) 00:23:55.627 21221.181 - 21346.011: 99.3416% ( 2) 00:23:55.627 21346.011 - 21470.842: 99.3711% ( 3) 00:23:55.627 26713.722 - 26838.552: 99.3907% ( 2) 00:23:55.627 26838.552 - 26963.383: 99.4202% ( 3) 00:23:55.627 26963.383 - 27088.213: 99.4399% ( 2) 00:23:55.627 27088.213 - 27213.044: 99.4693% ( 3) 00:23:55.627 27213.044 - 27337.874: 99.4988% ( 3) 00:23:55.627 27337.874 - 27462.705: 99.5185% ( 2) 00:23:55.627 27462.705 - 27587.535: 99.5480% ( 3) 00:23:55.627 27587.535 - 27712.366: 99.5676% ( 2) 00:23:55.627 27712.366 - 27837.196: 99.5971% ( 3) 00:23:55.627 27837.196 - 27962.027: 99.6266% ( 3) 00:23:55.627 27962.027 - 28086.857: 99.6561% ( 3) 00:23:55.627 28086.857 - 28211.688: 99.6757% ( 2) 00:23:55.627 28211.688 - 28336.518: 99.7052% ( 3) 00:23:55.627 28336.518 - 
28461.349: 99.7248% ( 2) 00:23:55.627 28461.349 - 28586.179: 99.7543% ( 3) 00:23:55.627 28586.179 - 28711.010: 99.7740% ( 2) 00:23:55.627 28711.010 - 28835.840: 99.8035% ( 3) 00:23:55.627 28835.840 - 28960.670: 99.8231% ( 2) 00:23:55.627 28960.670 - 29085.501: 99.8526% ( 3) 00:23:55.627 29085.501 - 29210.331: 99.8821% ( 3) 00:23:55.627 29210.331 - 29335.162: 99.9017% ( 2) 00:23:55.627 29335.162 - 29459.992: 99.9312% ( 3) 00:23:55.627 29459.992 - 29584.823: 99.9607% ( 3) 00:23:55.627 29584.823 - 29709.653: 99.9902% ( 3) 00:23:55.627 29709.653 - 29834.484: 100.0000% ( 1) 00:23:55.627 00:23:55.627 07:21:19 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:23:57.006 Initializing NVMe Controllers 00:23:57.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:57.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:23:57.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:23:57.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:23:57.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:57.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:23:57.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:23:57.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:23:57.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:23:57.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:23:57.006 Initialization complete. Launching workers. 00:23:57.006 ======================================================== 00:23:57.006 Latency(us) 00:23:57.006 Device Information : IOPS MiB/s Average min max 00:23:57.006 PCIE (0000:00:10.0) NSID 1 from core 0: 7938.76 93.03 16209.30 10760.15 45538.32 00:23:57.006 PCIE (0000:00:11.0) NSID 1 from core 0: 7938.76 93.03 16189.04 11057.76 43237.58 00:23:57.006 PCIE (0000:00:13.0) NSID 1 from core 0: 7938.76 93.03 16166.01 11066.86 44609.84 00:23:57.006 PCIE (0000:00:12.0) NSID 1 from core 0: 7938.76 93.03 16139.28 11141.52 42647.92 00:23:57.006 PCIE (0000:00:12.0) NSID 2 from core 0: 7938.76 93.03 16114.47 11058.81 40991.52 00:23:57.006 PCIE (0000:00:12.0) NSID 3 from core 0: 7938.76 93.03 16090.59 10949.60 38678.64 00:23:57.006 ======================================================== 00:23:57.007 Total : 47632.56 558.19 16151.45 10760.15 45538.32 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11109.912us 00:23:57.007 10.00000% : 12170.971us 00:23:57.007 25.00000% : 13107.200us 00:23:57.007 50.00000% : 14230.674us 00:23:57.007 75.00000% : 17850.758us 00:23:57.007 90.00000% : 23592.960us 00:23:57.007 95.00000% : 24716.434us 00:23:57.007 98.00000% : 25590.248us 00:23:57.007 99.00000% : 35451.855us 00:23:57.007 99.50000% : 44189.989us 00:23:57.007 99.90000% : 45438.293us 00:23:57.007 99.99000% : 45687.954us 00:23:57.007 99.99900% : 45687.954us 00:23:57.007 99.99990% : 45687.954us 00:23:57.007 99.99999% : 45687.954us 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11484.404us 00:23:57.007 10.00000% : 12233.387us 00:23:57.007 25.00000% : 13044.785us 00:23:57.007 50.00000% : 14105.844us 00:23:57.007 75.00000% : 18225.250us 00:23:57.007 90.00000% : 23343.299us 00:23:57.007 95.00000% : 24217.112us 
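Editorial note: the latency data in this section comes from the spdk_nvme_perf invocation logged above at 07:21:19. A standalone reproduction is sketched below; the flag annotations are editorial, based on the perf tool's usage text, and may differ between SPDK versions, so treat them as explanatory comments rather than tool output.

# Sketch: re-run this job's perf workload by hand (assumes the same workspace
# layout and NVMe devices already bound to the userspace driver; needs root).
#   -q 128    queue depth kept outstanding per namespace
#   -w write  100% write workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; given twice it also prints the
#             per-bucket histograms seen in this log
#   -i 0      shared-memory (multi-process) group ID
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w write -o 12288 -t 1 -LL -i 0

The Device Information table is internally consistent with these flags: 7938.76 IO/s x 12288 bytes per IO is 97.55 MB/s, i.e. the 93.03 MiB/s reported for each namespace.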
00:23:57.007 98.00000% : 25215.756us 00:23:57.007 99.00000% : 34702.872us 00:23:57.007 99.50000% : 41943.040us 00:23:57.007 99.90000% : 42941.684us 00:23:57.007 99.99000% : 43441.006us 00:23:57.007 99.99900% : 43441.006us 00:23:57.007 99.99990% : 43441.006us 00:23:57.007 99.99999% : 43441.006us 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11484.404us 00:23:57.007 10.00000% : 12358.217us 00:23:57.007 25.00000% : 13044.785us 00:23:57.007 50.00000% : 14168.259us 00:23:57.007 75.00000% : 18100.419us 00:23:57.007 90.00000% : 23343.299us 00:23:57.007 95.00000% : 24217.112us 00:23:57.007 98.00000% : 25090.926us 00:23:57.007 99.00000% : 33704.229us 00:23:57.007 99.50000% : 43441.006us 00:23:57.007 99.90000% : 44439.650us 00:23:57.007 99.99000% : 44689.310us 00:23:57.007 99.99900% : 44689.310us 00:23:57.007 99.99990% : 44689.310us 00:23:57.007 99.99999% : 44689.310us 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11484.404us 00:23:57.007 10.00000% : 12358.217us 00:23:57.007 25.00000% : 13044.785us 00:23:57.007 50.00000% : 14168.259us 00:23:57.007 75.00000% : 17725.928us 00:23:57.007 90.00000% : 23218.469us 00:23:57.007 95.00000% : 24092.282us 00:23:57.007 98.00000% : 24966.095us 00:23:57.007 99.00000% : 31457.280us 00:23:57.007 99.50000% : 41443.718us 00:23:57.007 99.90000% : 42442.362us 00:23:57.007 99.99000% : 42692.023us 00:23:57.007 99.99900% : 42692.023us 00:23:57.007 99.99990% : 42692.023us 00:23:57.007 99.99999% : 42692.023us 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11421.989us 00:23:57.007 10.00000% : 12358.217us 00:23:57.007 25.00000% : 13044.785us 00:23:57.007 50.00000% : 14230.674us 00:23:57.007 75.00000% : 17601.097us 00:23:57.007 90.00000% : 23218.469us 00:23:57.007 95.00000% : 24092.282us 00:23:57.007 98.00000% : 25090.926us 00:23:57.007 99.00000% : 28960.670us 00:23:57.007 99.50000% : 39945.752us 00:23:57.007 99.90000% : 40944.396us 00:23:57.007 99.99000% : 41194.057us 00:23:57.007 99.99900% : 41194.057us 00:23:57.007 99.99990% : 41194.057us 00:23:57.007 99.99999% : 41194.057us 00:23:57.007 00:23:57.007 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:23:57.007 ================================================================================= 00:23:57.007 1.00000% : 11421.989us 00:23:57.007 10.00000% : 12170.971us 00:23:57.007 25.00000% : 13107.200us 00:23:57.007 50.00000% : 14168.259us 00:23:57.007 75.00000% : 17601.097us 00:23:57.007 90.00000% : 23218.469us 00:23:57.007 95.00000% : 24217.112us 00:23:57.007 98.00000% : 25090.926us 00:23:57.007 99.00000% : 26339.230us 00:23:57.007 99.50000% : 37449.143us 00:23:57.007 99.90000% : 38447.787us 00:23:57.007 99.99000% : 38697.448us 00:23:57.007 99.99900% : 38697.448us 00:23:57.007 99.99990% : 38697.448us 00:23:57.007 99.99999% : 38697.448us 00:23:57.007 00:23:57.007 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:23:57.007 ============================================================================== 00:23:57.007 Range in us Cumulative IO count 00:23:57.007 10735.421 - 10797.836: 0.1500% ( 12) 00:23:57.007 10797.836 
- 10860.251: 0.1875% ( 3) 00:23:57.007 10860.251 - 10922.667: 0.3250% ( 11) 00:23:57.007 10922.667 - 10985.082: 0.4875% ( 13) 00:23:57.007 10985.082 - 11047.497: 0.8875% ( 32) 00:23:57.007 11047.497 - 11109.912: 1.2750% ( 31) 00:23:57.007 11109.912 - 11172.328: 1.4500% ( 14) 00:23:57.007 11172.328 - 11234.743: 1.6250% ( 14) 00:23:57.007 11234.743 - 11297.158: 1.9375% ( 25) 00:23:57.007 11297.158 - 11359.573: 2.2250% ( 23) 00:23:57.007 11359.573 - 11421.989: 2.4625% ( 19) 00:23:57.007 11421.989 - 11484.404: 2.9750% ( 41) 00:23:57.007 11484.404 - 11546.819: 3.3000% ( 26) 00:23:57.007 11546.819 - 11609.234: 3.7125% ( 33) 00:23:57.007 11609.234 - 11671.650: 4.4125% ( 56) 00:23:57.007 11671.650 - 11734.065: 4.9625% ( 44) 00:23:57.007 11734.065 - 11796.480: 5.6250% ( 53) 00:23:57.007 11796.480 - 11858.895: 6.2250% ( 48) 00:23:57.007 11858.895 - 11921.310: 7.1625% ( 75) 00:23:57.007 11921.310 - 11983.726: 7.8500% ( 55) 00:23:57.007 11983.726 - 12046.141: 8.6375% ( 63) 00:23:57.007 12046.141 - 12108.556: 9.3375% ( 56) 00:23:57.007 12108.556 - 12170.971: 10.2750% ( 75) 00:23:57.007 12170.971 - 12233.387: 11.2625% ( 79) 00:23:57.007 12233.387 - 12295.802: 12.1000% ( 67) 00:23:57.007 12295.802 - 12358.217: 12.8750% ( 62) 00:23:57.007 12358.217 - 12420.632: 13.9750% ( 88) 00:23:57.007 12420.632 - 12483.048: 14.9125% ( 75) 00:23:57.007 12483.048 - 12545.463: 15.8625% ( 76) 00:23:57.007 12545.463 - 12607.878: 17.0250% ( 93) 00:23:57.007 12607.878 - 12670.293: 18.3125% ( 103) 00:23:57.007 12670.293 - 12732.709: 19.6000% ( 103) 00:23:57.007 12732.709 - 12795.124: 20.6625% ( 85) 00:23:57.007 12795.124 - 12857.539: 21.7250% ( 85) 00:23:57.007 12857.539 - 12919.954: 22.8125% ( 87) 00:23:57.007 12919.954 - 12982.370: 23.8250% ( 81) 00:23:57.007 12982.370 - 13044.785: 24.9000% ( 86) 00:23:57.007 13044.785 - 13107.200: 26.1375% ( 99) 00:23:57.007 13107.200 - 13169.615: 27.5500% ( 113) 00:23:57.007 13169.615 - 13232.030: 29.1000% ( 124) 00:23:57.007 13232.030 - 13294.446: 30.3750% ( 102) 00:23:57.007 13294.446 - 13356.861: 31.8625% ( 119) 00:23:57.007 13356.861 - 13419.276: 33.2500% ( 111) 00:23:57.007 13419.276 - 13481.691: 34.5250% ( 102) 00:23:57.007 13481.691 - 13544.107: 35.8750% ( 108) 00:23:57.007 13544.107 - 13606.522: 37.4125% ( 123) 00:23:57.007 13606.522 - 13668.937: 38.7375% ( 106) 00:23:57.007 13668.937 - 13731.352: 40.1750% ( 115) 00:23:57.007 13731.352 - 13793.768: 41.4500% ( 102) 00:23:57.007 13793.768 - 13856.183: 42.6625% ( 97) 00:23:57.007 13856.183 - 13918.598: 44.0375% ( 110) 00:23:57.007 13918.598 - 13981.013: 45.3875% ( 108) 00:23:57.007 13981.013 - 14043.429: 46.6875% ( 104) 00:23:57.007 14043.429 - 14105.844: 47.9375% ( 100) 00:23:57.007 14105.844 - 14168.259: 49.0125% ( 86) 00:23:57.007 14168.259 - 14230.674: 50.1500% ( 91) 00:23:57.007 14230.674 - 14293.090: 51.2875% ( 91) 00:23:57.007 14293.090 - 14355.505: 52.4625% ( 94) 00:23:57.007 14355.505 - 14417.920: 53.5250% ( 85) 00:23:57.007 14417.920 - 14480.335: 54.4375% ( 73) 00:23:57.007 14480.335 - 14542.750: 55.2500% ( 65) 00:23:57.007 14542.750 - 14605.166: 56.2625% ( 81) 00:23:57.007 14605.166 - 14667.581: 57.1000% ( 67) 00:23:57.007 14667.581 - 14729.996: 57.9375% ( 67) 00:23:57.007 14729.996 - 14792.411: 58.6250% ( 55) 00:23:57.007 14792.411 - 14854.827: 59.3500% ( 58) 00:23:57.007 14854.827 - 14917.242: 59.9875% ( 51) 00:23:57.007 14917.242 - 14979.657: 60.4500% ( 37) 00:23:57.007 14979.657 - 15042.072: 60.9500% ( 40) 00:23:57.007 15042.072 - 15104.488: 61.5125% ( 45) 00:23:57.007 15104.488 - 15166.903: 62.1750% ( 53) 00:23:57.007 
15166.903 - 15229.318: 62.8000% ( 50) 00:23:57.007 15229.318 - 15291.733: 63.4000% ( 48) 00:23:57.007 15291.733 - 15354.149: 63.8375% ( 35) 00:23:57.007 15354.149 - 15416.564: 64.4375% ( 48) 00:23:57.007 15416.564 - 15478.979: 64.9000% ( 37) 00:23:57.007 15478.979 - 15541.394: 65.2250% ( 26) 00:23:57.007 15541.394 - 15603.810: 65.5625% ( 27) 00:23:57.007 15603.810 - 15666.225: 65.9000% ( 27) 00:23:57.007 15666.225 - 15728.640: 66.2500% ( 28) 00:23:57.007 15728.640 - 15791.055: 66.5750% ( 26) 00:23:57.007 15791.055 - 15853.470: 66.8250% ( 20) 00:23:57.007 15853.470 - 15915.886: 67.0875% ( 21) 00:23:57.007 15915.886 - 15978.301: 67.3375% ( 20) 00:23:57.007 15978.301 - 16103.131: 67.7875% ( 36) 00:23:57.007 16103.131 - 16227.962: 68.2875% ( 40) 00:23:57.007 16227.962 - 16352.792: 68.7750% ( 39) 00:23:57.007 16352.792 - 16477.623: 69.3625% ( 47) 00:23:57.007 16477.623 - 16602.453: 70.0000% ( 51) 00:23:57.007 16602.453 - 16727.284: 70.4500% ( 36) 00:23:57.007 16727.284 - 16852.114: 70.7875% ( 27) 00:23:57.008 16852.114 - 16976.945: 71.3375% ( 44) 00:23:57.008 16976.945 - 17101.775: 71.9125% ( 46) 00:23:57.008 17101.775 - 17226.606: 72.4875% ( 46) 00:23:57.008 17226.606 - 17351.436: 73.0625% ( 46) 00:23:57.008 17351.436 - 17476.267: 73.6000% ( 43) 00:23:57.008 17476.267 - 17601.097: 74.1750% ( 46) 00:23:57.008 17601.097 - 17725.928: 74.7250% ( 44) 00:23:57.008 17725.928 - 17850.758: 75.3250% ( 48) 00:23:57.008 17850.758 - 17975.589: 75.7625% ( 35) 00:23:57.008 17975.589 - 18100.419: 76.1500% ( 31) 00:23:57.008 18100.419 - 18225.250: 76.6000% ( 36) 00:23:57.008 18225.250 - 18350.080: 76.9125% ( 25) 00:23:57.008 18350.080 - 18474.910: 77.2125% ( 24) 00:23:57.008 18474.910 - 18599.741: 77.5500% ( 27) 00:23:57.008 18599.741 - 18724.571: 77.8250% ( 22) 00:23:57.008 18724.571 - 18849.402: 78.1250% ( 24) 00:23:57.008 18849.402 - 18974.232: 78.3125% ( 15) 00:23:57.008 18974.232 - 19099.063: 78.5375% ( 18) 00:23:57.008 19099.063 - 19223.893: 78.6750% ( 11) 00:23:57.008 19223.893 - 19348.724: 78.8375% ( 13) 00:23:57.008 19348.724 - 19473.554: 78.9875% ( 12) 00:23:57.008 19473.554 - 19598.385: 79.1125% ( 10) 00:23:57.008 19598.385 - 19723.215: 79.2250% ( 9) 00:23:57.008 19723.215 - 19848.046: 79.2875% ( 5) 00:23:57.008 19848.046 - 19972.876: 79.3250% ( 3) 00:23:57.008 19972.876 - 20097.707: 79.3625% ( 3) 00:23:57.008 20097.707 - 20222.537: 79.3875% ( 2) 00:23:57.008 20222.537 - 20347.368: 79.4000% ( 1) 00:23:57.008 20347.368 - 20472.198: 79.4375% ( 3) 00:23:57.008 20472.198 - 20597.029: 79.4750% ( 3) 00:23:57.008 20597.029 - 20721.859: 79.5000% ( 2) 00:23:57.008 20721.859 - 20846.690: 79.7125% ( 17) 00:23:57.008 20971.520 - 21096.350: 79.7375% ( 2) 00:23:57.008 21096.350 - 21221.181: 79.8500% ( 9) 00:23:57.008 21221.181 - 21346.011: 79.9250% ( 6) 00:23:57.008 21346.011 - 21470.842: 80.2625% ( 27) 00:23:57.008 21470.842 - 21595.672: 80.9125% ( 52) 00:23:57.008 21595.672 - 21720.503: 81.6750% ( 61) 00:23:57.008 21720.503 - 21845.333: 82.3625% ( 55) 00:23:57.008 21845.333 - 21970.164: 82.9125% ( 44) 00:23:57.008 21970.164 - 22094.994: 83.5750% ( 53) 00:23:57.008 22094.994 - 22219.825: 84.3000% ( 58) 00:23:57.008 22219.825 - 22344.655: 85.0750% ( 62) 00:23:57.008 22344.655 - 22469.486: 85.6375% ( 45) 00:23:57.008 22469.486 - 22594.316: 86.2250% ( 47) 00:23:57.008 22594.316 - 22719.147: 86.7250% ( 40) 00:23:57.008 22719.147 - 22843.977: 87.2500% ( 42) 00:23:57.008 22843.977 - 22968.808: 88.0000% ( 60) 00:23:57.008 22968.808 - 23093.638: 88.5625% ( 45) 00:23:57.008 23093.638 - 23218.469: 89.0250% ( 37) 
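Editorial note on reading these histogram blocks: the parenthesised figures are per-bucket IO counts, while the percentages are cumulative. The first bucket of this histogram (12 IOs at 0.1500%) pins the total at 12 / 0.0015 = 8000 IOs for the namespace; every later row is then a running sum. A quick arithmetic check with values copied from the rows above:

# 12 IOs amount to 0.1500% of the run, so the namespace completed 8000 IOs
echo '12 / 0.0015' | bc
# running sum of the first three buckets (12 + 3 + 11 = 26 IOs) as a share
# of 8000 -> .00325, i.e. the 0.3250% cumulative figure shown above
echo '(12 + 3 + 11) / 8000' | bc -l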
00:23:57.008 23218.469 - 23343.299: 89.3625% ( 27) 00:23:57.008 23343.299 - 23468.130: 89.8500% ( 39) 00:23:57.008 23468.130 - 23592.960: 90.3625% ( 41) 00:23:57.008 23592.960 - 23717.790: 90.9500% ( 47) 00:23:57.008 23717.790 - 23842.621: 91.5000% ( 44) 00:23:57.008 23842.621 - 23967.451: 92.0750% ( 46) 00:23:57.008 23967.451 - 24092.282: 92.6125% ( 43) 00:23:57.008 24092.282 - 24217.112: 93.1750% ( 45) 00:23:57.008 24217.112 - 24341.943: 93.7000% ( 42) 00:23:57.008 24341.943 - 24466.773: 94.2500% ( 44) 00:23:57.008 24466.773 - 24591.604: 94.8500% ( 48) 00:23:57.008 24591.604 - 24716.434: 95.3750% ( 42) 00:23:57.008 24716.434 - 24841.265: 95.8500% ( 38) 00:23:57.008 24841.265 - 24966.095: 96.3750% ( 42) 00:23:57.008 24966.095 - 25090.926: 96.8625% ( 39) 00:23:57.008 25090.926 - 25215.756: 97.3000% ( 35) 00:23:57.008 25215.756 - 25340.587: 97.6125% ( 25) 00:23:57.008 25340.587 - 25465.417: 97.8625% ( 20) 00:23:57.008 25465.417 - 25590.248: 98.0625% ( 16) 00:23:57.008 25590.248 - 25715.078: 98.2000% ( 11) 00:23:57.008 25715.078 - 25839.909: 98.3125% ( 9) 00:23:57.008 25839.909 - 25964.739: 98.3375% ( 2) 00:23:57.008 25964.739 - 26089.570: 98.3750% ( 3) 00:23:57.008 26089.570 - 26214.400: 98.4000% ( 2) 00:23:57.008 33204.907 - 33454.568: 98.4375% ( 3) 00:23:57.008 33454.568 - 33704.229: 98.5250% ( 7) 00:23:57.008 33704.229 - 33953.890: 98.6000% ( 6) 00:23:57.008 33953.890 - 34203.550: 98.6875% ( 7) 00:23:57.008 34203.550 - 34453.211: 98.7500% ( 5) 00:23:57.008 34453.211 - 34702.872: 98.8375% ( 7) 00:23:57.008 34702.872 - 34952.533: 98.9250% ( 7) 00:23:57.008 34952.533 - 35202.194: 98.9875% ( 5) 00:23:57.008 35202.194 - 35451.855: 99.0750% ( 7) 00:23:57.008 35451.855 - 35701.516: 99.1625% ( 7) 00:23:57.008 35701.516 - 35951.177: 99.2000% ( 3) 00:23:57.008 42941.684 - 43191.345: 99.2625% ( 5) 00:23:57.008 43191.345 - 43441.006: 99.3500% ( 7) 00:23:57.008 43441.006 - 43690.667: 99.4250% ( 6) 00:23:57.008 43690.667 - 43940.328: 99.4875% ( 5) 00:23:57.008 43940.328 - 44189.989: 99.5750% ( 7) 00:23:57.008 44189.989 - 44439.650: 99.6500% ( 6) 00:23:57.008 44439.650 - 44689.310: 99.7375% ( 7) 00:23:57.008 44689.310 - 44938.971: 99.8125% ( 6) 00:23:57.008 44938.971 - 45188.632: 99.8875% ( 6) 00:23:57.008 45188.632 - 45438.293: 99.9750% ( 7) 00:23:57.008 45438.293 - 45687.954: 100.0000% ( 2) 00:23:57.008 00:23:57.008 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:23:57.008 ============================================================================== 00:23:57.008 Range in us Cumulative IO count 00:23:57.008 11047.497 - 11109.912: 0.0125% ( 1) 00:23:57.008 11172.328 - 11234.743: 0.1125% ( 8) 00:23:57.008 11234.743 - 11297.158: 0.1875% ( 6) 00:23:57.008 11297.158 - 11359.573: 0.4500% ( 21) 00:23:57.008 11359.573 - 11421.989: 0.8250% ( 30) 00:23:57.008 11421.989 - 11484.404: 1.2250% ( 32) 00:23:57.008 11484.404 - 11546.819: 1.6125% ( 31) 00:23:57.008 11546.819 - 11609.234: 2.2375% ( 50) 00:23:57.008 11609.234 - 11671.650: 2.9250% ( 55) 00:23:57.008 11671.650 - 11734.065: 3.4625% ( 43) 00:23:57.008 11734.065 - 11796.480: 3.9875% ( 42) 00:23:57.008 11796.480 - 11858.895: 4.5500% ( 45) 00:23:57.008 11858.895 - 11921.310: 5.1750% ( 50) 00:23:57.008 11921.310 - 11983.726: 5.9625% ( 63) 00:23:57.008 11983.726 - 12046.141: 6.9625% ( 80) 00:23:57.008 12046.141 - 12108.556: 7.9125% ( 76) 00:23:57.008 12108.556 - 12170.971: 9.0250% ( 89) 00:23:57.008 12170.971 - 12233.387: 10.2000% ( 94) 00:23:57.008 12233.387 - 12295.802: 11.1625% ( 77) 00:23:57.008 12295.802 - 12358.217: 12.0750% ( 73) 
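Editorial note: the Summary latency data blocks are read straight off these cumulative histograms. Each percentile is the upper bound of the first bucket whose cumulative share reaches it; in the PCIE (0000:00:10.0) NSID 1 rows above, 90% is first reached by the 23468.130 - 23592.960 bucket (90.3625%), which is exactly the 90.00000% : 23592.960us line of that device's summary. A minimal extraction sketch, assuming the bucket rows have been saved one per line, timestamp prefix stripped, into a hypothetical hist.txt:

# Print the p90 upper bound from a saved histogram block (hist.txt is a
# hypothetical file holding "<lo> - <hi>: <cum%> ( <count>)" lines).
awk -v p=90 '$2 == "-" && $4 + 0 >= p { sub(/:/, "", $3); print $3 "us"; exit }' hist.txt

Run against the 0000:00:10.0 block above, this prints 23592.960us.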
00:23:57.008 12358.217 - 12420.632: 12.9875% ( 73) 00:23:57.008 12420.632 - 12483.048: 14.3250% ( 107) 00:23:57.008 12483.048 - 12545.463: 15.8000% ( 118) 00:23:57.008 12545.463 - 12607.878: 17.2375% ( 115) 00:23:57.008 12607.878 - 12670.293: 18.4500% ( 97) 00:23:57.008 12670.293 - 12732.709: 20.0500% ( 128) 00:23:57.008 12732.709 - 12795.124: 21.3500% ( 104) 00:23:57.008 12795.124 - 12857.539: 22.4750% ( 90) 00:23:57.008 12857.539 - 12919.954: 23.6000% ( 90) 00:23:57.008 12919.954 - 12982.370: 24.6750% ( 86) 00:23:57.008 12982.370 - 13044.785: 25.6250% ( 76) 00:23:57.008 13044.785 - 13107.200: 26.6625% ( 83) 00:23:57.008 13107.200 - 13169.615: 27.9875% ( 106) 00:23:57.008 13169.615 - 13232.030: 29.2875% ( 104) 00:23:57.008 13232.030 - 13294.446: 30.3875% ( 88) 00:23:57.008 13294.446 - 13356.861: 31.6125% ( 98) 00:23:57.008 13356.861 - 13419.276: 33.0250% ( 113) 00:23:57.008 13419.276 - 13481.691: 34.4875% ( 117) 00:23:57.008 13481.691 - 13544.107: 36.0375% ( 124) 00:23:57.008 13544.107 - 13606.522: 37.6875% ( 132) 00:23:57.008 13606.522 - 13668.937: 39.1625% ( 118) 00:23:57.008 13668.937 - 13731.352: 40.9875% ( 146) 00:23:57.008 13731.352 - 13793.768: 42.6875% ( 136) 00:23:57.008 13793.768 - 13856.183: 44.2875% ( 128) 00:23:57.008 13856.183 - 13918.598: 45.8875% ( 128) 00:23:57.008 13918.598 - 13981.013: 47.4750% ( 127) 00:23:57.008 13981.013 - 14043.429: 49.0250% ( 124) 00:23:57.008 14043.429 - 14105.844: 50.4000% ( 110) 00:23:57.008 14105.844 - 14168.259: 51.6750% ( 102) 00:23:57.008 14168.259 - 14230.674: 52.7875% ( 89) 00:23:57.008 14230.674 - 14293.090: 53.7000% ( 73) 00:23:57.008 14293.090 - 14355.505: 54.6750% ( 78) 00:23:57.008 14355.505 - 14417.920: 55.5625% ( 71) 00:23:57.008 14417.920 - 14480.335: 56.5125% ( 76) 00:23:57.008 14480.335 - 14542.750: 57.3750% ( 69) 00:23:57.008 14542.750 - 14605.166: 58.2500% ( 70) 00:23:57.008 14605.166 - 14667.581: 59.0750% ( 66) 00:23:57.008 14667.581 - 14729.996: 59.7625% ( 55) 00:23:57.008 14729.996 - 14792.411: 60.5250% ( 61) 00:23:57.008 14792.411 - 14854.827: 61.2000% ( 54) 00:23:57.008 14854.827 - 14917.242: 61.7625% ( 45) 00:23:57.008 14917.242 - 14979.657: 62.2500% ( 39) 00:23:57.008 14979.657 - 15042.072: 62.7625% ( 41) 00:23:57.008 15042.072 - 15104.488: 63.2000% ( 35) 00:23:57.008 15104.488 - 15166.903: 63.5750% ( 30) 00:23:57.008 15166.903 - 15229.318: 64.0125% ( 35) 00:23:57.008 15229.318 - 15291.733: 64.5875% ( 46) 00:23:57.008 15291.733 - 15354.149: 65.0500% ( 37) 00:23:57.008 15354.149 - 15416.564: 65.6375% ( 47) 00:23:57.008 15416.564 - 15478.979: 66.1375% ( 40) 00:23:57.008 15478.979 - 15541.394: 66.6000% ( 37) 00:23:57.008 15541.394 - 15603.810: 67.0250% ( 34) 00:23:57.008 15603.810 - 15666.225: 67.3375% ( 25) 00:23:57.008 15666.225 - 15728.640: 67.6750% ( 27) 00:23:57.008 15728.640 - 15791.055: 68.0000% ( 26) 00:23:57.008 15791.055 - 15853.470: 68.2125% ( 17) 00:23:57.008 15853.470 - 15915.886: 68.4375% ( 18) 00:23:57.008 15915.886 - 15978.301: 68.6125% ( 14) 00:23:57.008 15978.301 - 16103.131: 68.8875% ( 22) 00:23:57.008 16103.131 - 16227.962: 69.1750% ( 23) 00:23:57.008 16227.962 - 16352.792: 69.4500% ( 22) 00:23:57.008 16352.792 - 16477.623: 69.9750% ( 42) 00:23:57.008 16477.623 - 16602.453: 70.4125% ( 35) 00:23:57.009 16602.453 - 16727.284: 70.5375% ( 10) 00:23:57.009 16727.284 - 16852.114: 70.8500% ( 25) 00:23:57.009 16852.114 - 16976.945: 71.4375% ( 47) 00:23:57.009 16976.945 - 17101.775: 72.2625% ( 66) 00:23:57.009 17101.775 - 17226.606: 72.8125% ( 44) 00:23:57.009 17226.606 - 17351.436: 73.2625% ( 36) 00:23:57.009 
17351.436 - 17476.267: 73.5375% ( 22) 00:23:57.009 17476.267 - 17601.097: 73.7750% ( 19) 00:23:57.009 17601.097 - 17725.928: 74.1000% ( 26) 00:23:57.009 17725.928 - 17850.758: 74.5250% ( 34) 00:23:57.009 17850.758 - 17975.589: 74.8000% ( 22) 00:23:57.009 17975.589 - 18100.419: 74.9375% ( 11) 00:23:57.009 18100.419 - 18225.250: 75.0750% ( 11) 00:23:57.009 18225.250 - 18350.080: 75.2875% ( 17) 00:23:57.009 18350.080 - 18474.910: 75.5250% ( 19) 00:23:57.009 18474.910 - 18599.741: 75.7500% ( 18) 00:23:57.009 18599.741 - 18724.571: 76.0500% ( 24) 00:23:57.009 18724.571 - 18849.402: 76.3250% ( 22) 00:23:57.009 18849.402 - 18974.232: 76.6250% ( 24) 00:23:57.009 18974.232 - 19099.063: 76.9000% ( 22) 00:23:57.009 19099.063 - 19223.893: 77.1625% ( 21) 00:23:57.009 19223.893 - 19348.724: 77.3750% ( 17) 00:23:57.009 19348.724 - 19473.554: 77.6750% ( 24) 00:23:57.009 19473.554 - 19598.385: 77.9875% ( 25) 00:23:57.009 19598.385 - 19723.215: 78.3250% ( 27) 00:23:57.009 19723.215 - 19848.046: 78.6250% ( 24) 00:23:57.009 19848.046 - 19972.876: 78.9500% ( 26) 00:23:57.009 19972.876 - 20097.707: 79.1625% ( 17) 00:23:57.009 20097.707 - 20222.537: 79.3750% ( 17) 00:23:57.009 20222.537 - 20347.368: 79.5000% ( 10) 00:23:57.009 20347.368 - 20472.198: 79.6250% ( 10) 00:23:57.009 20472.198 - 20597.029: 79.7375% ( 9) 00:23:57.009 20597.029 - 20721.859: 79.8625% ( 10) 00:23:57.009 20721.859 - 20846.690: 79.9750% ( 9) 00:23:57.009 20846.690 - 20971.520: 80.0000% ( 2) 00:23:57.009 21346.011 - 21470.842: 80.0125% ( 1) 00:23:57.009 21470.842 - 21595.672: 80.0500% ( 3) 00:23:57.009 21595.672 - 21720.503: 80.1250% ( 6) 00:23:57.009 21720.503 - 21845.333: 80.1625% ( 3) 00:23:57.009 21845.333 - 21970.164: 80.2875% ( 10) 00:23:57.009 21970.164 - 22094.994: 80.6000% ( 25) 00:23:57.009 22094.994 - 22219.825: 81.1875% ( 47) 00:23:57.009 22219.825 - 22344.655: 81.9375% ( 60) 00:23:57.009 22344.655 - 22469.486: 82.6750% ( 59) 00:23:57.009 22469.486 - 22594.316: 83.6375% ( 77) 00:23:57.009 22594.316 - 22719.147: 84.5250% ( 71) 00:23:57.009 22719.147 - 22843.977: 85.4625% ( 75) 00:23:57.009 22843.977 - 22968.808: 86.5500% ( 87) 00:23:57.009 22968.808 - 23093.638: 88.1500% ( 128) 00:23:57.009 23093.638 - 23218.469: 89.3000% ( 92) 00:23:57.009 23218.469 - 23343.299: 90.3250% ( 82) 00:23:57.009 23343.299 - 23468.130: 91.1125% ( 63) 00:23:57.009 23468.130 - 23592.960: 91.9375% ( 66) 00:23:57.009 23592.960 - 23717.790: 92.6375% ( 56) 00:23:57.009 23717.790 - 23842.621: 93.2875% ( 52) 00:23:57.009 23842.621 - 23967.451: 93.9375% ( 52) 00:23:57.009 23967.451 - 24092.282: 94.5500% ( 49) 00:23:57.009 24092.282 - 24217.112: 95.1125% ( 45) 00:23:57.009 24217.112 - 24341.943: 95.7500% ( 51) 00:23:57.009 24341.943 - 24466.773: 96.3000% ( 44) 00:23:57.009 24466.773 - 24591.604: 96.8125% ( 41) 00:23:57.009 24591.604 - 24716.434: 97.1625% ( 28) 00:23:57.009 24716.434 - 24841.265: 97.5250% ( 29) 00:23:57.009 24841.265 - 24966.095: 97.8000% ( 22) 00:23:57.009 24966.095 - 25090.926: 97.9500% ( 12) 00:23:57.009 25090.926 - 25215.756: 98.1000% ( 12) 00:23:57.009 25215.756 - 25340.587: 98.2250% ( 10) 00:23:57.009 25340.587 - 25465.417: 98.3125% ( 7) 00:23:57.009 25465.417 - 25590.248: 98.3500% ( 3) 00:23:57.009 25590.248 - 25715.078: 98.3750% ( 2) 00:23:57.009 25715.078 - 25839.909: 98.4000% ( 2) 00:23:57.009 32955.246 - 33204.907: 98.4750% ( 6) 00:23:57.009 33204.907 - 33454.568: 98.5625% ( 7) 00:23:57.009 33454.568 - 33704.229: 98.6625% ( 8) 00:23:57.009 33704.229 - 33953.890: 98.7500% ( 7) 00:23:57.009 33953.890 - 34203.550: 98.8375% ( 7) 00:23:57.009 
34203.550 - 34453.211: 98.9250% ( 7) 00:23:57.009 34453.211 - 34702.872: 99.0000% ( 6) 00:23:57.009 34702.872 - 34952.533: 99.1000% ( 8) 00:23:57.009 34952.533 - 35202.194: 99.1875% ( 7) 00:23:57.009 35202.194 - 35451.855: 99.2000% ( 1) 00:23:57.009 40694.735 - 40944.396: 99.2375% ( 3) 00:23:57.009 40944.396 - 41194.057: 99.3125% ( 6) 00:23:57.009 41194.057 - 41443.718: 99.4000% ( 7) 00:23:57.009 41443.718 - 41693.379: 99.4625% ( 5) 00:23:57.009 41693.379 - 41943.040: 99.5500% ( 7) 00:23:57.009 41943.040 - 42192.701: 99.6375% ( 7) 00:23:57.009 42192.701 - 42442.362: 99.7250% ( 7) 00:23:57.009 42442.362 - 42692.023: 99.8125% ( 7) 00:23:57.009 42692.023 - 42941.684: 99.9000% ( 7) 00:23:57.009 42941.684 - 43191.345: 99.9750% ( 6) 00:23:57.009 43191.345 - 43441.006: 100.0000% ( 2) 00:23:57.009 00:23:57.009 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:23:57.009 ============================================================================== 00:23:57.009 Range in us Cumulative IO count 00:23:57.009 11047.497 - 11109.912: 0.0125% ( 1) 00:23:57.009 11234.743 - 11297.158: 0.0625% ( 4) 00:23:57.009 11297.158 - 11359.573: 0.3000% ( 19) 00:23:57.009 11359.573 - 11421.989: 0.6875% ( 31) 00:23:57.009 11421.989 - 11484.404: 1.0625% ( 30) 00:23:57.009 11484.404 - 11546.819: 1.3625% ( 24) 00:23:57.009 11546.819 - 11609.234: 1.8750% ( 41) 00:23:57.009 11609.234 - 11671.650: 2.3250% ( 36) 00:23:57.009 11671.650 - 11734.065: 2.8250% ( 40) 00:23:57.009 11734.065 - 11796.480: 3.3250% ( 40) 00:23:57.009 11796.480 - 11858.895: 3.9250% ( 48) 00:23:57.009 11858.895 - 11921.310: 4.5500% ( 50) 00:23:57.009 11921.310 - 11983.726: 5.2125% ( 53) 00:23:57.009 11983.726 - 12046.141: 6.0500% ( 67) 00:23:57.009 12046.141 - 12108.556: 6.9375% ( 71) 00:23:57.009 12108.556 - 12170.971: 7.8875% ( 76) 00:23:57.009 12170.971 - 12233.387: 8.7375% ( 68) 00:23:57.009 12233.387 - 12295.802: 9.7500% ( 81) 00:23:57.009 12295.802 - 12358.217: 10.8125% ( 85) 00:23:57.009 12358.217 - 12420.632: 12.1125% ( 104) 00:23:57.009 12420.632 - 12483.048: 13.6750% ( 125) 00:23:57.009 12483.048 - 12545.463: 15.2250% ( 124) 00:23:57.009 12545.463 - 12607.878: 16.6000% ( 110) 00:23:57.009 12607.878 - 12670.293: 17.9625% ( 109) 00:23:57.009 12670.293 - 12732.709: 19.2125% ( 100) 00:23:57.009 12732.709 - 12795.124: 20.5125% ( 104) 00:23:57.009 12795.124 - 12857.539: 21.8750% ( 109) 00:23:57.009 12857.539 - 12919.954: 23.3000% ( 114) 00:23:57.009 12919.954 - 12982.370: 24.5500% ( 100) 00:23:57.009 12982.370 - 13044.785: 26.2125% ( 133) 00:23:57.009 13044.785 - 13107.200: 27.5750% ( 109) 00:23:57.009 13107.200 - 13169.615: 28.9375% ( 109) 00:23:57.009 13169.615 - 13232.030: 30.4250% ( 119) 00:23:57.009 13232.030 - 13294.446: 32.0000% ( 126) 00:23:57.009 13294.446 - 13356.861: 33.6750% ( 134) 00:23:57.009 13356.861 - 13419.276: 35.5375% ( 149) 00:23:57.009 13419.276 - 13481.691: 37.3625% ( 146) 00:23:57.009 13481.691 - 13544.107: 39.0250% ( 133) 00:23:57.009 13544.107 - 13606.522: 40.5250% ( 120) 00:23:57.009 13606.522 - 13668.937: 41.7875% ( 101) 00:23:57.009 13668.937 - 13731.352: 43.1875% ( 112) 00:23:57.009 13731.352 - 13793.768: 44.7000% ( 121) 00:23:57.009 13793.768 - 13856.183: 46.0000% ( 104) 00:23:57.009 13856.183 - 13918.598: 47.0125% ( 81) 00:23:57.009 13918.598 - 13981.013: 47.9875% ( 78) 00:23:57.009 13981.013 - 14043.429: 48.8250% ( 67) 00:23:57.009 14043.429 - 14105.844: 49.8625% ( 83) 00:23:57.009 14105.844 - 14168.259: 50.8375% ( 78) 00:23:57.009 14168.259 - 14230.674: 51.9000% ( 85) 00:23:57.009 14230.674 - 14293.090: 
53.0250% ( 90) 00:23:57.009 14293.090 - 14355.505: 54.1625% ( 91) 00:23:57.009 14355.505 - 14417.920: 55.3000% ( 91) 00:23:57.009 14417.920 - 14480.335: 56.3125% ( 81) 00:23:57.009 14480.335 - 14542.750: 57.2000% ( 71) 00:23:57.009 14542.750 - 14605.166: 58.1000% ( 72) 00:23:57.009 14605.166 - 14667.581: 58.8250% ( 58) 00:23:57.009 14667.581 - 14729.996: 59.5000% ( 54) 00:23:57.009 14729.996 - 14792.411: 60.1250% ( 50) 00:23:57.009 14792.411 - 14854.827: 60.8000% ( 54) 00:23:57.009 14854.827 - 14917.242: 61.4750% ( 54) 00:23:57.009 14917.242 - 14979.657: 62.0000% ( 42) 00:23:57.009 14979.657 - 15042.072: 62.4750% ( 38) 00:23:57.009 15042.072 - 15104.488: 63.1500% ( 54) 00:23:57.009 15104.488 - 15166.903: 63.7500% ( 48) 00:23:57.009 15166.903 - 15229.318: 64.2875% ( 43) 00:23:57.009 15229.318 - 15291.733: 64.7250% ( 35) 00:23:57.009 15291.733 - 15354.149: 65.1750% ( 36) 00:23:57.009 15354.149 - 15416.564: 65.6000% ( 34) 00:23:57.009 15416.564 - 15478.979: 65.9125% ( 25) 00:23:57.009 15478.979 - 15541.394: 66.2000% ( 23) 00:23:57.009 15541.394 - 15603.810: 66.4000% ( 16) 00:23:57.009 15603.810 - 15666.225: 66.6250% ( 18) 00:23:57.009 15666.225 - 15728.640: 66.9250% ( 24) 00:23:57.009 15728.640 - 15791.055: 67.2250% ( 24) 00:23:57.009 15791.055 - 15853.470: 67.5750% ( 28) 00:23:57.009 15853.470 - 15915.886: 67.9375% ( 29) 00:23:57.009 15915.886 - 15978.301: 68.5000% ( 45) 00:23:57.009 15978.301 - 16103.131: 68.9750% ( 38) 00:23:57.009 16103.131 - 16227.962: 69.4125% ( 35) 00:23:57.009 16227.962 - 16352.792: 69.8125% ( 32) 00:23:57.009 16352.792 - 16477.623: 70.1375% ( 26) 00:23:57.009 16477.623 - 16602.453: 70.5375% ( 32) 00:23:57.009 16602.453 - 16727.284: 71.1375% ( 48) 00:23:57.009 16727.284 - 16852.114: 71.5000% ( 29) 00:23:57.009 16852.114 - 16976.945: 71.7250% ( 18) 00:23:57.009 16976.945 - 17101.775: 71.9750% ( 20) 00:23:57.009 17101.775 - 17226.606: 72.3625% ( 31) 00:23:57.009 17226.606 - 17351.436: 72.9250% ( 45) 00:23:57.009 17351.436 - 17476.267: 73.5000% ( 46) 00:23:57.009 17476.267 - 17601.097: 73.8000% ( 24) 00:23:57.010 17601.097 - 17725.928: 74.0625% ( 21) 00:23:57.010 17725.928 - 17850.758: 74.4375% ( 30) 00:23:57.010 17850.758 - 17975.589: 74.8000% ( 29) 00:23:57.010 17975.589 - 18100.419: 75.0375% ( 19) 00:23:57.010 18100.419 - 18225.250: 75.2500% ( 17) 00:23:57.010 18225.250 - 18350.080: 75.4500% ( 16) 00:23:57.010 18350.080 - 18474.910: 75.6125% ( 13) 00:23:57.010 18474.910 - 18599.741: 75.7625% ( 12) 00:23:57.010 18599.741 - 18724.571: 75.9375% ( 14) 00:23:57.010 18724.571 - 18849.402: 76.1375% ( 16) 00:23:57.010 18849.402 - 18974.232: 76.3125% ( 14) 00:23:57.010 18974.232 - 19099.063: 76.4375% ( 10) 00:23:57.010 19099.063 - 19223.893: 76.5625% ( 10) 00:23:57.010 19223.893 - 19348.724: 76.7000% ( 11) 00:23:57.010 19348.724 - 19473.554: 76.9875% ( 23) 00:23:57.010 19473.554 - 19598.385: 77.3250% ( 27) 00:23:57.010 19598.385 - 19723.215: 77.6875% ( 29) 00:23:57.010 19723.215 - 19848.046: 78.2500% ( 45) 00:23:57.010 19848.046 - 19972.876: 78.7625% ( 41) 00:23:57.010 19972.876 - 20097.707: 79.0625% ( 24) 00:23:57.010 20097.707 - 20222.537: 79.3000% ( 19) 00:23:57.010 20222.537 - 20347.368: 79.4625% ( 13) 00:23:57.010 20347.368 - 20472.198: 79.6500% ( 15) 00:23:57.010 20472.198 - 20597.029: 79.8500% ( 16) 00:23:57.010 20597.029 - 20721.859: 79.9875% ( 11) 00:23:57.010 20721.859 - 20846.690: 80.1125% ( 10) 00:23:57.010 20846.690 - 20971.520: 80.1625% ( 4) 00:23:57.010 20971.520 - 21096.350: 80.2125% ( 4) 00:23:57.010 21096.350 - 21221.181: 80.2500% ( 3) 00:23:57.010 21221.181 
- 21346.011: 80.3125% ( 5) 00:23:57.010 21346.011 - 21470.842: 80.4500% ( 11) 00:23:57.010 21470.842 - 21595.672: 80.5875% ( 11) 00:23:57.010 21595.672 - 21720.503: 80.7625% ( 14) 00:23:57.010 21720.503 - 21845.333: 81.1000% ( 27) 00:23:57.010 21845.333 - 21970.164: 81.5875% ( 39) 00:23:57.010 21970.164 - 22094.994: 82.2500% ( 53) 00:23:57.010 22094.994 - 22219.825: 83.1250% ( 70) 00:23:57.010 22219.825 - 22344.655: 83.9000% ( 62) 00:23:57.010 22344.655 - 22469.486: 84.7750% ( 70) 00:23:57.010 22469.486 - 22594.316: 85.6125% ( 67) 00:23:57.010 22594.316 - 22719.147: 86.4750% ( 69) 00:23:57.010 22719.147 - 22843.977: 87.3125% ( 67) 00:23:57.010 22843.977 - 22968.808: 88.2625% ( 76) 00:23:57.010 22968.808 - 23093.638: 89.0750% ( 65) 00:23:57.010 23093.638 - 23218.469: 89.8625% ( 63) 00:23:57.010 23218.469 - 23343.299: 91.0125% ( 92) 00:23:57.010 23343.299 - 23468.130: 91.5500% ( 43) 00:23:57.010 23468.130 - 23592.960: 92.3625% ( 65) 00:23:57.010 23592.960 - 23717.790: 92.9500% ( 47) 00:23:57.010 23717.790 - 23842.621: 93.4500% ( 40) 00:23:57.010 23842.621 - 23967.451: 94.1250% ( 54) 00:23:57.010 23967.451 - 24092.282: 94.8250% ( 56) 00:23:57.010 24092.282 - 24217.112: 95.3625% ( 43) 00:23:57.010 24217.112 - 24341.943: 95.8625% ( 40) 00:23:57.010 24341.943 - 24466.773: 96.3500% ( 39) 00:23:57.010 24466.773 - 24591.604: 96.8375% ( 39) 00:23:57.010 24591.604 - 24716.434: 97.2750% ( 35) 00:23:57.010 24716.434 - 24841.265: 97.6250% ( 28) 00:23:57.010 24841.265 - 24966.095: 97.8500% ( 18) 00:23:57.010 24966.095 - 25090.926: 98.0500% ( 16) 00:23:57.010 25090.926 - 25215.756: 98.1375% ( 7) 00:23:57.010 25215.756 - 25340.587: 98.2250% ( 7) 00:23:57.010 25340.587 - 25465.417: 98.3000% ( 6) 00:23:57.010 25465.417 - 25590.248: 98.3375% ( 3) 00:23:57.010 25590.248 - 25715.078: 98.4000% ( 5) 00:23:57.010 31831.771 - 31956.602: 98.4375% ( 3) 00:23:57.010 31956.602 - 32206.263: 98.5125% ( 6) 00:23:57.010 32206.263 - 32455.924: 98.5875% ( 6) 00:23:57.010 32455.924 - 32705.585: 98.6750% ( 7) 00:23:57.010 32705.585 - 32955.246: 98.7625% ( 7) 00:23:57.010 32955.246 - 33204.907: 98.8500% ( 7) 00:23:57.010 33204.907 - 33454.568: 98.9375% ( 7) 00:23:57.010 33454.568 - 33704.229: 99.0250% ( 7) 00:23:57.010 33704.229 - 33953.890: 99.1125% ( 7) 00:23:57.010 33953.890 - 34203.550: 99.2000% ( 7) 00:23:57.010 42442.362 - 42692.023: 99.2500% ( 4) 00:23:57.010 42692.023 - 42941.684: 99.3500% ( 8) 00:23:57.010 42941.684 - 43191.345: 99.4375% ( 7) 00:23:57.010 43191.345 - 43441.006: 99.5250% ( 7) 00:23:57.010 43441.006 - 43690.667: 99.6250% ( 8) 00:23:57.010 43690.667 - 43940.328: 99.7250% ( 8) 00:23:57.010 43940.328 - 44189.989: 99.8375% ( 9) 00:23:57.010 44189.989 - 44439.650: 99.9250% ( 7) 00:23:57.010 44439.650 - 44689.310: 100.0000% ( 6) 00:23:57.010 00:23:57.010 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:23:57.010 ============================================================================== 00:23:57.010 Range in us Cumulative IO count 00:23:57.010 11109.912 - 11172.328: 0.0250% ( 2) 00:23:57.010 11172.328 - 11234.743: 0.1375% ( 9) 00:23:57.010 11234.743 - 11297.158: 0.2500% ( 9) 00:23:57.010 11297.158 - 11359.573: 0.4500% ( 16) 00:23:57.010 11359.573 - 11421.989: 0.6500% ( 16) 00:23:57.010 11421.989 - 11484.404: 1.0750% ( 34) 00:23:57.010 11484.404 - 11546.819: 1.5875% ( 41) 00:23:57.010 11546.819 - 11609.234: 2.0625% ( 38) 00:23:57.010 11609.234 - 11671.650: 2.6875% ( 50) 00:23:57.010 11671.650 - 11734.065: 3.5875% ( 72) 00:23:57.010 11734.065 - 11796.480: 4.1875% ( 48) 00:23:57.010 11796.480 - 
11858.895: 4.8000% ( 49) 00:23:57.010 11858.895 - 11921.310: 5.3500% ( 44) 00:23:57.010 11921.310 - 11983.726: 5.8000% ( 36) 00:23:57.010 11983.726 - 12046.141: 6.3250% ( 42) 00:23:57.010 12046.141 - 12108.556: 6.9375% ( 49) 00:23:57.010 12108.556 - 12170.971: 7.6875% ( 60) 00:23:57.010 12170.971 - 12233.387: 8.4625% ( 62) 00:23:57.010 12233.387 - 12295.802: 9.2750% ( 65) 00:23:57.010 12295.802 - 12358.217: 10.1750% ( 72) 00:23:57.010 12358.217 - 12420.632: 11.2375% ( 85) 00:23:57.010 12420.632 - 12483.048: 12.2625% ( 82) 00:23:57.010 12483.048 - 12545.463: 13.4250% ( 93) 00:23:57.010 12545.463 - 12607.878: 14.6625% ( 99) 00:23:57.010 12607.878 - 12670.293: 16.3625% ( 136) 00:23:57.010 12670.293 - 12732.709: 17.7625% ( 112) 00:23:57.010 12732.709 - 12795.124: 19.4000% ( 131) 00:23:57.010 12795.124 - 12857.539: 21.1875% ( 143) 00:23:57.010 12857.539 - 12919.954: 22.8375% ( 132) 00:23:57.010 12919.954 - 12982.370: 24.4125% ( 126) 00:23:57.010 12982.370 - 13044.785: 25.9750% ( 125) 00:23:57.010 13044.785 - 13107.200: 27.9750% ( 160) 00:23:57.010 13107.200 - 13169.615: 29.8250% ( 148) 00:23:57.010 13169.615 - 13232.030: 31.5375% ( 137) 00:23:57.010 13232.030 - 13294.446: 33.1250% ( 127) 00:23:57.010 13294.446 - 13356.861: 34.6625% ( 123) 00:23:57.010 13356.861 - 13419.276: 36.0250% ( 109) 00:23:57.010 13419.276 - 13481.691: 37.5125% ( 119) 00:23:57.010 13481.691 - 13544.107: 38.8000% ( 103) 00:23:57.010 13544.107 - 13606.522: 40.1250% ( 106) 00:23:57.010 13606.522 - 13668.937: 41.5000% ( 110) 00:23:57.010 13668.937 - 13731.352: 42.6875% ( 95) 00:23:57.010 13731.352 - 13793.768: 43.9250% ( 99) 00:23:57.010 13793.768 - 13856.183: 45.0125% ( 87) 00:23:57.010 13856.183 - 13918.598: 46.0750% ( 85) 00:23:57.010 13918.598 - 13981.013: 47.2125% ( 91) 00:23:57.010 13981.013 - 14043.429: 48.2500% ( 83) 00:23:57.010 14043.429 - 14105.844: 49.2375% ( 79) 00:23:57.010 14105.844 - 14168.259: 50.1125% ( 70) 00:23:57.010 14168.259 - 14230.674: 50.9125% ( 64) 00:23:57.010 14230.674 - 14293.090: 51.9000% ( 79) 00:23:57.010 14293.090 - 14355.505: 52.8500% ( 76) 00:23:57.010 14355.505 - 14417.920: 53.9375% ( 87) 00:23:57.010 14417.920 - 14480.335: 55.3000% ( 109) 00:23:57.010 14480.335 - 14542.750: 56.3125% ( 81) 00:23:57.010 14542.750 - 14605.166: 57.3000% ( 79) 00:23:57.010 14605.166 - 14667.581: 58.1250% ( 66) 00:23:57.010 14667.581 - 14729.996: 59.0375% ( 73) 00:23:57.010 14729.996 - 14792.411: 59.8250% ( 63) 00:23:57.010 14792.411 - 14854.827: 60.5750% ( 60) 00:23:57.010 14854.827 - 14917.242: 61.1875% ( 49) 00:23:57.010 14917.242 - 14979.657: 61.6625% ( 38) 00:23:57.010 14979.657 - 15042.072: 62.0625% ( 32) 00:23:57.010 15042.072 - 15104.488: 62.5125% ( 36) 00:23:57.010 15104.488 - 15166.903: 62.9875% ( 38) 00:23:57.010 15166.903 - 15229.318: 63.4000% ( 33) 00:23:57.010 15229.318 - 15291.733: 63.8500% ( 36) 00:23:57.010 15291.733 - 15354.149: 64.3250% ( 38) 00:23:57.010 15354.149 - 15416.564: 64.7375% ( 33) 00:23:57.010 15416.564 - 15478.979: 65.0500% ( 25) 00:23:57.010 15478.979 - 15541.394: 65.3250% ( 22) 00:23:57.010 15541.394 - 15603.810: 65.6625% ( 27) 00:23:57.010 15603.810 - 15666.225: 66.0000% ( 27) 00:23:57.010 15666.225 - 15728.640: 66.3250% ( 26) 00:23:57.010 15728.640 - 15791.055: 66.6500% ( 26) 00:23:57.010 15791.055 - 15853.470: 66.9250% ( 22) 00:23:57.010 15853.470 - 15915.886: 67.2375% ( 25) 00:23:57.010 15915.886 - 15978.301: 67.6125% ( 30) 00:23:57.010 15978.301 - 16103.131: 68.1000% ( 39) 00:23:57.010 16103.131 - 16227.962: 68.4375% ( 27) 00:23:57.010 16227.962 - 16352.792: 68.8500% ( 33) 
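Editorial note, one cross-check that applies to all six namespaces in this run: with a fixed queue depth, Little's law ties the Average column of the Device Information table to the IOPS column, since 128 requests were kept in flight per namespace throughout.

# Little's law: mean latency = queue depth / throughput
# 128 / 7938.76 IO/s -> ~.0161 s, i.e. ~16.1 ms, matching the reported
# per-namespace averages of 16090.59us - 16209.30us to within ~0.5%
echo '128 / 7938.76' | bc -l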
00:23:57.010 16352.792 - 16477.623: 69.3375% ( 39) 00:23:57.010 16477.623 - 16602.453: 70.0750% ( 59) 00:23:57.010 16602.453 - 16727.284: 70.8000% ( 58) 00:23:57.010 16727.284 - 16852.114: 71.3750% ( 46) 00:23:57.010 16852.114 - 16976.945: 71.8000% ( 34) 00:23:57.010 16976.945 - 17101.775: 72.1000% ( 24) 00:23:57.010 17101.775 - 17226.606: 72.7750% ( 54) 00:23:57.010 17226.606 - 17351.436: 73.3500% ( 46) 00:23:57.010 17351.436 - 17476.267: 73.9375% ( 47) 00:23:57.010 17476.267 - 17601.097: 74.5125% ( 46) 00:23:57.010 17601.097 - 17725.928: 75.0375% ( 42) 00:23:57.010 17725.928 - 17850.758: 75.5125% ( 38) 00:23:57.010 17850.758 - 17975.589: 75.9125% ( 32) 00:23:57.010 17975.589 - 18100.419: 76.2000% ( 23) 00:23:57.010 18100.419 - 18225.250: 76.5875% ( 31) 00:23:57.010 18225.250 - 18350.080: 77.0625% ( 38) 00:23:57.010 18350.080 - 18474.910: 77.2000% ( 11) 00:23:57.011 18474.910 - 18599.741: 77.2375% ( 3) 00:23:57.011 18599.741 - 18724.571: 77.2750% ( 3) 00:23:57.011 18724.571 - 18849.402: 77.3000% ( 2) 00:23:57.011 18849.402 - 18974.232: 77.3375% ( 3) 00:23:57.011 18974.232 - 19099.063: 77.3750% ( 3) 00:23:57.011 19099.063 - 19223.893: 77.4000% ( 2) 00:23:57.011 19223.893 - 19348.724: 77.5375% ( 11) 00:23:57.011 19348.724 - 19473.554: 77.7250% ( 15) 00:23:57.011 19473.554 - 19598.385: 77.8625% ( 11) 00:23:57.011 19598.385 - 19723.215: 77.9875% ( 10) 00:23:57.011 19723.215 - 19848.046: 78.2250% ( 19) 00:23:57.011 19848.046 - 19972.876: 78.5000% ( 22) 00:23:57.011 19972.876 - 20097.707: 78.7625% ( 21) 00:23:57.011 20097.707 - 20222.537: 78.9875% ( 18) 00:23:57.011 20222.537 - 20347.368: 79.1875% ( 16) 00:23:57.011 20347.368 - 20472.198: 79.3375% ( 12) 00:23:57.011 20472.198 - 20597.029: 79.5250% ( 15) 00:23:57.011 20597.029 - 20721.859: 79.6375% ( 9) 00:23:57.011 20721.859 - 20846.690: 79.7375% ( 8) 00:23:57.011 20846.690 - 20971.520: 79.8250% ( 7) 00:23:57.011 20971.520 - 21096.350: 79.9375% ( 9) 00:23:57.011 21096.350 - 21221.181: 79.9750% ( 3) 00:23:57.011 21221.181 - 21346.011: 80.0500% ( 6) 00:23:57.011 21346.011 - 21470.842: 80.1000% ( 4) 00:23:57.011 21470.842 - 21595.672: 80.1750% ( 6) 00:23:57.011 21595.672 - 21720.503: 80.3000% ( 10) 00:23:57.011 21720.503 - 21845.333: 80.4750% ( 14) 00:23:57.011 21845.333 - 21970.164: 80.9875% ( 41) 00:23:57.011 21970.164 - 22094.994: 81.5125% ( 42) 00:23:57.011 22094.994 - 22219.825: 82.4000% ( 71) 00:23:57.011 22219.825 - 22344.655: 83.2375% ( 67) 00:23:57.011 22344.655 - 22469.486: 84.0375% ( 64) 00:23:57.011 22469.486 - 22594.316: 84.7750% ( 59) 00:23:57.011 22594.316 - 22719.147: 85.5000% ( 58) 00:23:57.011 22719.147 - 22843.977: 86.2875% ( 63) 00:23:57.011 22843.977 - 22968.808: 87.4375% ( 92) 00:23:57.011 22968.808 - 23093.638: 88.9125% ( 118) 00:23:57.011 23093.638 - 23218.469: 90.2500% ( 107) 00:23:57.011 23218.469 - 23343.299: 91.1375% ( 71) 00:23:57.011 23343.299 - 23468.130: 91.8375% ( 56) 00:23:57.011 23468.130 - 23592.960: 92.9125% ( 86) 00:23:57.011 23592.960 - 23717.790: 93.5250% ( 49) 00:23:57.011 23717.790 - 23842.621: 94.1875% ( 53) 00:23:57.011 23842.621 - 23967.451: 94.8500% ( 53) 00:23:57.011 23967.451 - 24092.282: 95.3500% ( 40) 00:23:57.011 24092.282 - 24217.112: 95.8500% ( 40) 00:23:57.011 24217.112 - 24341.943: 96.3125% ( 37) 00:23:57.011 24341.943 - 24466.773: 96.7625% ( 36) 00:23:57.011 24466.773 - 24591.604: 97.1375% ( 30) 00:23:57.011 24591.604 - 24716.434: 97.5375% ( 32) 00:23:57.011 24716.434 - 24841.265: 97.8750% ( 27) 00:23:57.011 24841.265 - 24966.095: 98.0875% ( 17) 00:23:57.011 24966.095 - 25090.926: 98.2500% ( 
13) 00:23:57.011 25090.926 - 25215.756: 98.3000% ( 4) 00:23:57.011 25215.756 - 25340.587: 98.3750% ( 6) 00:23:57.011 25340.587 - 25465.417: 98.4000% ( 2) 00:23:57.011 29584.823 - 29709.653: 98.4250% ( 2) 00:23:57.011 29709.653 - 29834.484: 98.4625% ( 3) 00:23:57.011 29834.484 - 29959.314: 98.5000% ( 3) 00:23:57.011 29959.314 - 30084.145: 98.5375% ( 3) 00:23:57.011 30084.145 - 30208.975: 98.5875% ( 4) 00:23:57.011 30208.975 - 30333.806: 98.6250% ( 3) 00:23:57.011 30333.806 - 30458.636: 98.6750% ( 4) 00:23:57.011 30458.636 - 30583.467: 98.7250% ( 4) 00:23:57.011 30583.467 - 30708.297: 98.7625% ( 3) 00:23:57.011 30708.297 - 30833.128: 98.8125% ( 4) 00:23:57.011 30833.128 - 30957.958: 98.8625% ( 4) 00:23:57.011 30957.958 - 31082.789: 98.9000% ( 3) 00:23:57.011 31082.789 - 31207.619: 98.9375% ( 3) 00:23:57.011 31207.619 - 31332.450: 98.9750% ( 3) 00:23:57.011 31332.450 - 31457.280: 99.0125% ( 3) 00:23:57.011 31457.280 - 31582.110: 99.0500% ( 3) 00:23:57.011 31582.110 - 31706.941: 99.0875% ( 3) 00:23:57.011 31706.941 - 31831.771: 99.1250% ( 3) 00:23:57.011 31831.771 - 31956.602: 99.1625% ( 3) 00:23:57.011 31956.602 - 32206.263: 99.2000% ( 3) 00:23:57.011 40445.074 - 40694.735: 99.2750% ( 6) 00:23:57.011 40694.735 - 40944.396: 99.3750% ( 8) 00:23:57.011 40944.396 - 41194.057: 99.4750% ( 8) 00:23:57.011 41194.057 - 41443.718: 99.5750% ( 8) 00:23:57.011 41443.718 - 41693.379: 99.6750% ( 8) 00:23:57.011 41693.379 - 41943.040: 99.7625% ( 7) 00:23:57.011 41943.040 - 42192.701: 99.8500% ( 7) 00:23:57.011 42192.701 - 42442.362: 99.9250% ( 6) 00:23:57.011 42442.362 - 42692.023: 100.0000% ( 6) 00:23:57.011 00:23:57.011 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:23:57.011 ============================================================================== 00:23:57.011 Range in us Cumulative IO count 00:23:57.011 11047.497 - 11109.912: 0.0875% ( 7) 00:23:57.011 11109.912 - 11172.328: 0.1625% ( 6) 00:23:57.011 11172.328 - 11234.743: 0.2875% ( 10) 00:23:57.011 11234.743 - 11297.158: 0.4750% ( 15) 00:23:57.011 11297.158 - 11359.573: 0.9125% ( 35) 00:23:57.011 11359.573 - 11421.989: 1.4125% ( 40) 00:23:57.011 11421.989 - 11484.404: 1.9250% ( 41) 00:23:57.011 11484.404 - 11546.819: 2.3000% ( 30) 00:23:57.011 11546.819 - 11609.234: 2.9500% ( 52) 00:23:57.011 11609.234 - 11671.650: 3.5750% ( 50) 00:23:57.011 11671.650 - 11734.065: 4.3375% ( 61) 00:23:57.011 11734.065 - 11796.480: 5.1250% ( 63) 00:23:57.011 11796.480 - 11858.895: 5.7500% ( 50) 00:23:57.011 11858.895 - 11921.310: 6.2125% ( 37) 00:23:57.011 11921.310 - 11983.726: 6.6625% ( 36) 00:23:57.011 11983.726 - 12046.141: 7.2125% ( 44) 00:23:57.011 12046.141 - 12108.556: 7.7875% ( 46) 00:23:57.011 12108.556 - 12170.971: 8.3750% ( 47) 00:23:57.011 12170.971 - 12233.387: 9.0125% ( 51) 00:23:57.011 12233.387 - 12295.802: 9.6875% ( 54) 00:23:57.011 12295.802 - 12358.217: 10.8000% ( 89) 00:23:57.011 12358.217 - 12420.632: 11.7750% ( 78) 00:23:57.011 12420.632 - 12483.048: 12.8750% ( 88) 00:23:57.011 12483.048 - 12545.463: 14.1000% ( 98) 00:23:57.011 12545.463 - 12607.878: 15.3500% ( 100) 00:23:57.011 12607.878 - 12670.293: 16.5375% ( 95) 00:23:57.011 12670.293 - 12732.709: 17.7500% ( 97) 00:23:57.011 12732.709 - 12795.124: 19.0875% ( 107) 00:23:57.011 12795.124 - 12857.539: 20.6625% ( 126) 00:23:57.011 12857.539 - 12919.954: 22.1125% ( 116) 00:23:57.011 12919.954 - 12982.370: 23.6125% ( 120) 00:23:57.011 12982.370 - 13044.785: 25.1500% ( 123) 00:23:57.011 13044.785 - 13107.200: 26.7375% ( 127) 00:23:57.011 13107.200 - 13169.615: 28.5250% ( 143) 
00:23:57.011 13169.615 - 13232.030: 30.4250% ( 152) 00:23:57.011 13232.030 - 13294.446: 31.8500% ( 114) 00:23:57.011 13294.446 - 13356.861: 33.1625% ( 105) 00:23:57.011 13356.861 - 13419.276: 34.7750% ( 129) 00:23:57.011 13419.276 - 13481.691: 36.0250% ( 100) 00:23:57.011 13481.691 - 13544.107: 37.4625% ( 115) 00:23:57.011 13544.107 - 13606.522: 38.6375% ( 94) 00:23:57.011 13606.522 - 13668.937: 39.8375% ( 96) 00:23:57.011 13668.937 - 13731.352: 41.0625% ( 98) 00:23:57.011 13731.352 - 13793.768: 42.4375% ( 110) 00:23:57.011 13793.768 - 13856.183: 43.7000% ( 101) 00:23:57.011 13856.183 - 13918.598: 45.1125% ( 113) 00:23:57.011 13918.598 - 13981.013: 46.3750% ( 101) 00:23:57.011 13981.013 - 14043.429: 47.5125% ( 91) 00:23:57.011 14043.429 - 14105.844: 48.7625% ( 100) 00:23:57.011 14105.844 - 14168.259: 49.9750% ( 97) 00:23:57.011 14168.259 - 14230.674: 51.1000% ( 90) 00:23:57.011 14230.674 - 14293.090: 52.2375% ( 91) 00:23:57.011 14293.090 - 14355.505: 53.1250% ( 71) 00:23:57.011 14355.505 - 14417.920: 54.0875% ( 77) 00:23:57.011 14417.920 - 14480.335: 54.9625% ( 70) 00:23:57.011 14480.335 - 14542.750: 55.8250% ( 69) 00:23:57.011 14542.750 - 14605.166: 56.7500% ( 74) 00:23:57.011 14605.166 - 14667.581: 57.6625% ( 73) 00:23:57.011 14667.581 - 14729.996: 58.6375% ( 78) 00:23:57.011 14729.996 - 14792.411: 59.4250% ( 63) 00:23:57.011 14792.411 - 14854.827: 60.1125% ( 55) 00:23:57.011 14854.827 - 14917.242: 60.6500% ( 43) 00:23:57.011 14917.242 - 14979.657: 61.1500% ( 40) 00:23:57.011 14979.657 - 15042.072: 61.5375% ( 31) 00:23:57.011 15042.072 - 15104.488: 61.9250% ( 31) 00:23:57.011 15104.488 - 15166.903: 62.3000% ( 30) 00:23:57.011 15166.903 - 15229.318: 62.7375% ( 35) 00:23:57.011 15229.318 - 15291.733: 63.4375% ( 56) 00:23:57.011 15291.733 - 15354.149: 63.9625% ( 42) 00:23:57.012 15354.149 - 15416.564: 64.4125% ( 36) 00:23:57.012 15416.564 - 15478.979: 64.9000% ( 39) 00:23:57.012 15478.979 - 15541.394: 65.3625% ( 37) 00:23:57.012 15541.394 - 15603.810: 65.7375% ( 30) 00:23:57.012 15603.810 - 15666.225: 66.0125% ( 22) 00:23:57.012 15666.225 - 15728.640: 66.3500% ( 27) 00:23:57.012 15728.640 - 15791.055: 66.6625% ( 25) 00:23:57.012 15791.055 - 15853.470: 67.0000% ( 27) 00:23:57.012 15853.470 - 15915.886: 67.4375% ( 35) 00:23:57.012 15915.886 - 15978.301: 67.6875% ( 20) 00:23:57.012 15978.301 - 16103.131: 68.1000% ( 33) 00:23:57.012 16103.131 - 16227.962: 68.5125% ( 33) 00:23:57.012 16227.962 - 16352.792: 68.8500% ( 27) 00:23:57.012 16352.792 - 16477.623: 69.2750% ( 34) 00:23:57.012 16477.623 - 16602.453: 69.8375% ( 45) 00:23:57.012 16602.453 - 16727.284: 70.5625% ( 58) 00:23:57.012 16727.284 - 16852.114: 71.8125% ( 100) 00:23:57.012 16852.114 - 16976.945: 72.5000% ( 55) 00:23:57.012 16976.945 - 17101.775: 73.0375% ( 43) 00:23:57.012 17101.775 - 17226.606: 73.4500% ( 33) 00:23:57.012 17226.606 - 17351.436: 73.8500% ( 32) 00:23:57.012 17351.436 - 17476.267: 74.4875% ( 51) 00:23:57.012 17476.267 - 17601.097: 75.2125% ( 58) 00:23:57.012 17601.097 - 17725.928: 75.8375% ( 50) 00:23:57.012 17725.928 - 17850.758: 76.1500% ( 25) 00:23:57.012 17850.758 - 17975.589: 76.4125% ( 21) 00:23:57.012 17975.589 - 18100.419: 76.6125% ( 16) 00:23:57.012 18100.419 - 18225.250: 76.8000% ( 15) 00:23:57.012 18225.250 - 18350.080: 76.9625% ( 13) 00:23:57.012 18350.080 - 18474.910: 77.0750% ( 9) 00:23:57.012 18474.910 - 18599.741: 77.2750% ( 16) 00:23:57.012 18599.741 - 18724.571: 77.4625% ( 15) 00:23:57.012 18724.571 - 18849.402: 77.6000% ( 11) 00:23:57.012 18849.402 - 18974.232: 77.7625% ( 13) 00:23:57.012 18974.232 - 
19099.063: 77.9125% ( 12) 00:23:57.012 19099.063 - 19223.893: 78.0125% ( 8) 00:23:57.012 19223.893 - 19348.724: 78.0750% ( 5) 00:23:57.012 19348.724 - 19473.554: 78.1375% ( 5) 00:23:57.012 19473.554 - 19598.385: 78.2125% ( 6) 00:23:57.012 19598.385 - 19723.215: 78.2875% ( 6) 00:23:57.012 19723.215 - 19848.046: 78.4125% ( 10) 00:23:57.012 19848.046 - 19972.876: 78.6000% ( 15) 00:23:57.012 19972.876 - 20097.707: 78.7375% ( 11) 00:23:57.012 20097.707 - 20222.537: 78.8000% ( 5) 00:23:57.012 20222.537 - 20347.368: 78.8625% ( 5) 00:23:57.012 20347.368 - 20472.198: 78.9500% ( 7) 00:23:57.012 20472.198 - 20597.029: 79.0375% ( 7) 00:23:57.012 20597.029 - 20721.859: 79.1625% ( 10) 00:23:57.012 20721.859 - 20846.690: 79.2750% ( 9) 00:23:57.012 20846.690 - 20971.520: 79.3375% ( 5) 00:23:57.012 20971.520 - 21096.350: 79.3750% ( 3) 00:23:57.012 21096.350 - 21221.181: 79.4375% ( 5) 00:23:57.012 21221.181 - 21346.011: 79.5000% ( 5) 00:23:57.012 21346.011 - 21470.842: 79.5500% ( 4) 00:23:57.012 21470.842 - 21595.672: 79.6000% ( 4) 00:23:57.012 21595.672 - 21720.503: 79.6625% ( 5) 00:23:57.012 21720.503 - 21845.333: 79.8125% ( 12) 00:23:57.012 21845.333 - 21970.164: 80.1375% ( 26) 00:23:57.012 21970.164 - 22094.994: 80.7500% ( 49) 00:23:57.012 22094.994 - 22219.825: 81.6375% ( 71) 00:23:57.012 22219.825 - 22344.655: 82.6250% ( 79) 00:23:57.012 22344.655 - 22469.486: 83.8500% ( 98) 00:23:57.012 22469.486 - 22594.316: 85.0375% ( 95) 00:23:57.012 22594.316 - 22719.147: 86.1000% ( 85) 00:23:57.012 22719.147 - 22843.977: 87.2375% ( 91) 00:23:57.012 22843.977 - 22968.808: 88.3375% ( 88) 00:23:57.012 22968.808 - 23093.638: 89.5375% ( 96) 00:23:57.012 23093.638 - 23218.469: 90.6250% ( 87) 00:23:57.012 23218.469 - 23343.299: 91.3375% ( 57) 00:23:57.012 23343.299 - 23468.130: 92.0000% ( 53) 00:23:57.012 23468.130 - 23592.960: 92.6125% ( 49) 00:23:57.012 23592.960 - 23717.790: 93.3875% ( 62) 00:23:57.012 23717.790 - 23842.621: 93.9375% ( 44) 00:23:57.012 23842.621 - 23967.451: 94.4625% ( 42) 00:23:57.012 23967.451 - 24092.282: 95.1000% ( 51) 00:23:57.012 24092.282 - 24217.112: 95.6000% ( 40) 00:23:57.012 24217.112 - 24341.943: 96.0875% ( 39) 00:23:57.012 24341.943 - 24466.773: 96.5625% ( 38) 00:23:57.012 24466.773 - 24591.604: 96.9625% ( 32) 00:23:57.012 24591.604 - 24716.434: 97.3250% ( 29) 00:23:57.012 24716.434 - 24841.265: 97.6000% ( 22) 00:23:57.012 24841.265 - 24966.095: 97.8875% ( 23) 00:23:57.012 24966.095 - 25090.926: 98.0625% ( 14) 00:23:57.012 25090.926 - 25215.756: 98.1875% ( 10) 00:23:57.012 25215.756 - 25340.587: 98.2875% ( 8) 00:23:57.012 25340.587 - 25465.417: 98.3625% ( 6) 00:23:57.012 25465.417 - 25590.248: 98.4000% ( 3) 00:23:57.012 27088.213 - 27213.044: 98.4250% ( 2) 00:23:57.012 27213.044 - 27337.874: 98.4625% ( 3) 00:23:57.012 27337.874 - 27462.705: 98.5125% ( 4) 00:23:57.012 27462.705 - 27587.535: 98.5500% ( 3) 00:23:57.012 27587.535 - 27712.366: 98.5875% ( 3) 00:23:57.012 27712.366 - 27837.196: 98.6375% ( 4) 00:23:57.012 27837.196 - 27962.027: 98.6750% ( 3) 00:23:57.012 27962.027 - 28086.857: 98.7250% ( 4) 00:23:57.012 28086.857 - 28211.688: 98.7625% ( 3) 00:23:57.012 28211.688 - 28336.518: 98.8000% ( 3) 00:23:57.012 28336.518 - 28461.349: 98.8500% ( 4) 00:23:57.012 28461.349 - 28586.179: 98.8875% ( 3) 00:23:57.012 28586.179 - 28711.010: 98.9375% ( 4) 00:23:57.012 28711.010 - 28835.840: 98.9750% ( 3) 00:23:57.012 28835.840 - 28960.670: 99.0250% ( 4) 00:23:57.012 28960.670 - 29085.501: 99.0625% ( 3) 00:23:57.012 29085.501 - 29210.331: 99.1125% ( 4) 00:23:57.012 29210.331 - 29335.162: 99.1625% ( 
4) 00:23:57.012 29335.162 - 29459.992: 99.2000% ( 3) 00:23:57.012 38697.448 - 38947.109: 99.2250% ( 2) 00:23:57.012 38947.109 - 39196.770: 99.2875% ( 5) 00:23:57.012 39196.770 - 39446.430: 99.3875% ( 8) 00:23:57.012 39446.430 - 39696.091: 99.4750% ( 7) 00:23:57.012 39696.091 - 39945.752: 99.5750% ( 8) 00:23:57.012 39945.752 - 40195.413: 99.6750% ( 8) 00:23:57.012 40195.413 - 40445.074: 99.7750% ( 8) 00:23:57.012 40445.074 - 40694.735: 99.8750% ( 8) 00:23:57.012 40694.735 - 40944.396: 99.9750% ( 8) 00:23:57.012 40944.396 - 41194.057: 100.0000% ( 2) 00:23:57.012 00:23:57.012 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:23:57.012 ============================================================================== 00:23:57.012 Range in us Cumulative IO count 00:23:57.012 10922.667 - 10985.082: 0.0125% ( 1) 00:23:57.012 11047.497 - 11109.912: 0.0250% ( 1) 00:23:57.012 11109.912 - 11172.328: 0.1125% ( 7) 00:23:57.012 11172.328 - 11234.743: 0.3500% ( 19) 00:23:57.012 11234.743 - 11297.158: 0.5875% ( 19) 00:23:57.012 11297.158 - 11359.573: 0.8875% ( 24) 00:23:57.012 11359.573 - 11421.989: 1.6625% ( 62) 00:23:57.012 11421.989 - 11484.404: 2.2625% ( 48) 00:23:57.012 11484.404 - 11546.819: 2.8625% ( 48) 00:23:57.012 11546.819 - 11609.234: 3.4250% ( 45) 00:23:57.012 11609.234 - 11671.650: 3.9125% ( 39) 00:23:57.012 11671.650 - 11734.065: 4.6500% ( 59) 00:23:57.012 11734.065 - 11796.480: 5.7250% ( 86) 00:23:57.012 11796.480 - 11858.895: 6.4250% ( 56) 00:23:57.012 11858.895 - 11921.310: 6.9750% ( 44) 00:23:57.012 11921.310 - 11983.726: 7.7875% ( 65) 00:23:57.012 11983.726 - 12046.141: 8.5625% ( 62) 00:23:57.012 12046.141 - 12108.556: 9.3250% ( 61) 00:23:57.012 12108.556 - 12170.971: 10.1375% ( 65) 00:23:57.012 12170.971 - 12233.387: 10.9000% ( 61) 00:23:57.012 12233.387 - 12295.802: 11.7000% ( 64) 00:23:57.012 12295.802 - 12358.217: 12.4625% ( 61) 00:23:57.012 12358.217 - 12420.632: 13.1875% ( 58) 00:23:57.012 12420.632 - 12483.048: 13.9625% ( 62) 00:23:57.012 12483.048 - 12545.463: 14.8000% ( 67) 00:23:57.012 12545.463 - 12607.878: 15.9250% ( 90) 00:23:57.012 12607.878 - 12670.293: 17.1500% ( 98) 00:23:57.012 12670.293 - 12732.709: 18.4000% ( 100) 00:23:57.012 12732.709 - 12795.124: 19.5625% ( 93) 00:23:57.012 12795.124 - 12857.539: 20.6875% ( 90) 00:23:57.012 12857.539 - 12919.954: 21.9125% ( 98) 00:23:57.012 12919.954 - 12982.370: 23.1375% ( 98) 00:23:57.012 12982.370 - 13044.785: 24.3250% ( 95) 00:23:57.012 13044.785 - 13107.200: 25.9250% ( 128) 00:23:57.012 13107.200 - 13169.615: 27.3000% ( 110) 00:23:57.012 13169.615 - 13232.030: 28.7250% ( 114) 00:23:57.012 13232.030 - 13294.446: 30.1875% ( 117) 00:23:57.012 13294.446 - 13356.861: 31.7875% ( 128) 00:23:57.012 13356.861 - 13419.276: 33.1500% ( 109) 00:23:57.012 13419.276 - 13481.691: 34.6125% ( 117) 00:23:57.012 13481.691 - 13544.107: 35.8500% ( 99) 00:23:57.012 13544.107 - 13606.522: 37.2250% ( 110) 00:23:57.012 13606.522 - 13668.937: 38.6375% ( 113) 00:23:57.012 13668.937 - 13731.352: 39.9750% ( 107) 00:23:57.012 13731.352 - 13793.768: 41.6750% ( 136) 00:23:57.012 13793.768 - 13856.183: 43.1250% ( 116) 00:23:57.012 13856.183 - 13918.598: 44.3875% ( 101) 00:23:57.012 13918.598 - 13981.013: 45.6750% ( 103) 00:23:57.012 13981.013 - 14043.429: 47.2125% ( 123) 00:23:57.012 14043.429 - 14105.844: 48.7625% ( 124) 00:23:57.012 14105.844 - 14168.259: 50.0500% ( 103) 00:23:57.012 14168.259 - 14230.674: 51.1125% ( 85) 00:23:57.012 14230.674 - 14293.090: 52.2125% ( 88) 00:23:57.012 14293.090 - 14355.505: 53.2875% ( 86) 00:23:57.012 14355.505 - 
14417.920: 54.2750% ( 79) 00:23:57.012 14417.920 - 14480.335: 55.2500% ( 78) 00:23:57.012 14480.335 - 14542.750: 56.1000% ( 68) 00:23:57.012 14542.750 - 14605.166: 57.0375% ( 75) 00:23:57.012 14605.166 - 14667.581: 57.7875% ( 60) 00:23:57.012 14667.581 - 14729.996: 58.5125% ( 58) 00:23:57.012 14729.996 - 14792.411: 59.1500% ( 51) 00:23:57.012 14792.411 - 14854.827: 59.6500% ( 40) 00:23:57.012 14854.827 - 14917.242: 60.1250% ( 38) 00:23:57.012 14917.242 - 14979.657: 60.6250% ( 40) 00:23:57.012 14979.657 - 15042.072: 61.1250% ( 40) 00:23:57.012 15042.072 - 15104.488: 61.5875% ( 37) 00:23:57.012 15104.488 - 15166.903: 62.0000% ( 33) 00:23:57.013 15166.903 - 15229.318: 62.3000% ( 24) 00:23:57.013 15229.318 - 15291.733: 62.6750% ( 30) 00:23:57.013 15291.733 - 15354.149: 63.1625% ( 39) 00:23:57.013 15354.149 - 15416.564: 63.8000% ( 51) 00:23:57.013 15416.564 - 15478.979: 64.2875% ( 39) 00:23:57.013 15478.979 - 15541.394: 64.7875% ( 40) 00:23:57.013 15541.394 - 15603.810: 65.1250% ( 27) 00:23:57.013 15603.810 - 15666.225: 65.5500% ( 34) 00:23:57.013 15666.225 - 15728.640: 65.9250% ( 30) 00:23:57.013 15728.640 - 15791.055: 66.4625% ( 43) 00:23:57.013 15791.055 - 15853.470: 67.0250% ( 45) 00:23:57.013 15853.470 - 15915.886: 67.2750% ( 20) 00:23:57.013 15915.886 - 15978.301: 67.5000% ( 18) 00:23:57.013 15978.301 - 16103.131: 67.8625% ( 29) 00:23:57.013 16103.131 - 16227.962: 68.1625% ( 24) 00:23:57.013 16227.962 - 16352.792: 68.6000% ( 35) 00:23:57.013 16352.792 - 16477.623: 69.1625% ( 45) 00:23:57.013 16477.623 - 16602.453: 69.6500% ( 39) 00:23:57.013 16602.453 - 16727.284: 70.3750% ( 58) 00:23:57.013 16727.284 - 16852.114: 71.2125% ( 67) 00:23:57.013 16852.114 - 16976.945: 72.3375% ( 90) 00:23:57.013 16976.945 - 17101.775: 72.9000% ( 45) 00:23:57.013 17101.775 - 17226.606: 73.4375% ( 43) 00:23:57.013 17226.606 - 17351.436: 73.9875% ( 44) 00:23:57.013 17351.436 - 17476.267: 74.5000% ( 41) 00:23:57.013 17476.267 - 17601.097: 75.1000% ( 48) 00:23:57.013 17601.097 - 17725.928: 75.6000% ( 40) 00:23:57.013 17725.928 - 17850.758: 76.0500% ( 36) 00:23:57.013 17850.758 - 17975.589: 76.3750% ( 26) 00:23:57.013 17975.589 - 18100.419: 76.6375% ( 21) 00:23:57.013 18100.419 - 18225.250: 76.8625% ( 18) 00:23:57.013 18225.250 - 18350.080: 77.1000% ( 19) 00:23:57.013 18350.080 - 18474.910: 77.4125% ( 25) 00:23:57.013 18474.910 - 18599.741: 77.5375% ( 10) 00:23:57.013 18599.741 - 18724.571: 77.6375% ( 8) 00:23:57.013 18724.571 - 18849.402: 77.7500% ( 9) 00:23:57.013 18849.402 - 18974.232: 77.8875% ( 11) 00:23:57.013 18974.232 - 19099.063: 78.0250% ( 11) 00:23:57.013 19099.063 - 19223.893: 78.1750% ( 12) 00:23:57.013 19223.893 - 19348.724: 78.2500% ( 6) 00:23:57.013 19348.724 - 19473.554: 78.3250% ( 6) 00:23:57.013 19473.554 - 19598.385: 78.4000% ( 6) 00:23:57.013 20347.368 - 20472.198: 78.4625% ( 5) 00:23:57.013 20472.198 - 20597.029: 78.5125% ( 4) 00:23:57.013 20597.029 - 20721.859: 78.6000% ( 7) 00:23:57.013 20721.859 - 20846.690: 78.6500% ( 4) 00:23:57.013 20846.690 - 20971.520: 78.6875% ( 3) 00:23:57.013 20971.520 - 21096.350: 78.7500% ( 5) 00:23:57.013 21096.350 - 21221.181: 78.8125% ( 5) 00:23:57.013 21221.181 - 21346.011: 78.8625% ( 4) 00:23:57.013 21346.011 - 21470.842: 78.9625% ( 8) 00:23:57.013 21470.842 - 21595.672: 79.1875% ( 18) 00:23:57.013 21595.672 - 21720.503: 79.4750% ( 23) 00:23:57.013 21720.503 - 21845.333: 79.9625% ( 39) 00:23:57.013 21845.333 - 21970.164: 80.5125% ( 44) 00:23:57.013 21970.164 - 22094.994: 81.2250% ( 57) 00:23:57.013 22094.994 - 22219.825: 82.1500% ( 74) 00:23:57.013 22219.825 
- 22344.655: 83.0750% ( 74) 00:23:57.013 22344.655 - 22469.486: 84.0500% ( 78) 00:23:57.013 22469.486 - 22594.316: 84.9500% ( 72) 00:23:57.013 22594.316 - 22719.147: 85.9250% ( 78) 00:23:57.013 22719.147 - 22843.977: 86.9500% ( 82) 00:23:57.013 22843.977 - 22968.808: 87.9500% ( 80) 00:23:57.013 22968.808 - 23093.638: 89.3625% ( 113) 00:23:57.013 23093.638 - 23218.469: 90.5875% ( 98) 00:23:57.013 23218.469 - 23343.299: 91.2000% ( 49) 00:23:57.013 23343.299 - 23468.130: 91.7625% ( 45) 00:23:57.013 23468.130 - 23592.960: 92.5125% ( 60) 00:23:57.013 23592.960 - 23717.790: 93.0875% ( 46) 00:23:57.013 23717.790 - 23842.621: 93.6375% ( 44) 00:23:57.013 23842.621 - 23967.451: 94.3375% ( 56) 00:23:57.013 23967.451 - 24092.282: 94.9375% ( 48) 00:23:57.013 24092.282 - 24217.112: 95.4625% ( 42) 00:23:57.013 24217.112 - 24341.943: 95.9500% ( 39) 00:23:57.013 24341.943 - 24466.773: 96.4000% ( 36) 00:23:57.013 24466.773 - 24591.604: 96.8375% ( 35) 00:23:57.013 24591.604 - 24716.434: 97.1875% ( 28) 00:23:57.013 24716.434 - 24841.265: 97.6000% ( 33) 00:23:57.013 24841.265 - 24966.095: 97.8750% ( 22) 00:23:57.013 24966.095 - 25090.926: 98.0750% ( 16) 00:23:57.013 25090.926 - 25215.756: 98.2125% ( 11) 00:23:57.013 25215.756 - 25340.587: 98.3000% ( 7) 00:23:57.013 25340.587 - 25465.417: 98.7250% ( 34) 00:23:57.013 25465.417 - 25590.248: 98.8000% ( 6) 00:23:57.013 25590.248 - 25715.078: 98.8375% ( 3) 00:23:57.013 25715.078 - 25839.909: 98.8625% ( 2) 00:23:57.013 25839.909 - 25964.739: 98.9125% ( 4) 00:23:57.013 25964.739 - 26089.570: 98.9375% ( 2) 00:23:57.013 26089.570 - 26214.400: 98.9750% ( 3) 00:23:57.013 26214.400 - 26339.230: 99.0000% ( 2) 00:23:57.013 26339.230 - 26464.061: 99.0375% ( 3) 00:23:57.013 26464.061 - 26588.891: 99.0750% ( 3) 00:23:57.013 26588.891 - 26713.722: 99.1125% ( 3) 00:23:57.013 26713.722 - 26838.552: 99.1500% ( 3) 00:23:57.013 26838.552 - 26963.383: 99.2000% ( 4) 00:23:57.013 36450.499 - 36700.160: 99.2375% ( 3) 00:23:57.013 36700.160 - 36949.821: 99.3375% ( 8) 00:23:57.013 36949.821 - 37199.482: 99.4375% ( 8) 00:23:57.013 37199.482 - 37449.143: 99.5250% ( 7) 00:23:57.013 37449.143 - 37698.804: 99.6250% ( 8) 00:23:57.013 37698.804 - 37948.465: 99.7250% ( 8) 00:23:57.013 37948.465 - 38198.126: 99.8125% ( 7) 00:23:57.013 38198.126 - 38447.787: 99.9000% ( 7) 00:23:57.013 38447.787 - 38697.448: 100.0000% ( 8) 00:23:57.013 00:23:57.013 07:21:21 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:23:57.013 00:23:57.013 real 0m2.879s 00:23:57.013 user 0m2.372s 00:23:57.013 sys 0m0.383s 00:23:57.013 07:21:21 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.013 07:21:21 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:23:57.013 ************************************ 00:23:57.013 END TEST nvme_perf 00:23:57.013 ************************************ 00:23:57.013 07:21:21 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:23:57.013 07:21:21 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:57.013 07:21:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.013 07:21:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:57.013 ************************************ 00:23:57.013 START TEST nvme_hello_world 00:23:57.013 ************************************ 00:23:57.013 07:21:21 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:23:57.579 Initializing NVMe Controllers 
00:23:57.579 Attached to 0000:00:10.0 00:23:57.579 Namespace ID: 1 size: 6GB 00:23:57.579 Attached to 0000:00:11.0 00:23:57.579 Namespace ID: 1 size: 5GB 00:23:57.579 Attached to 0000:00:13.0 00:23:57.579 Namespace ID: 1 size: 1GB 00:23:57.579 Attached to 0000:00:12.0 00:23:57.579 Namespace ID: 1 size: 4GB 00:23:57.579 Namespace ID: 2 size: 4GB 00:23:57.579 Namespace ID: 3 size: 4GB 00:23:57.579 Initialization complete. 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 INFO: using host memory buffer for IO 00:23:57.579 Hello world! 00:23:57.579 00:23:57.579 real 0m0.512s 00:23:57.579 user 0m0.253s 00:23:57.579 sys 0m0.187s 00:23:57.579 07:21:21 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.579 07:21:21 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:57.579 ************************************ 00:23:57.579 END TEST nvme_hello_world 00:23:57.579 ************************************ 00:23:57.579 07:21:21 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:23:57.579 07:21:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.579 07:21:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.579 07:21:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:57.579 ************************************ 00:23:57.579 START TEST nvme_sgl 00:23:57.579 ************************************ 00:23:57.579 07:21:21 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:23:57.921 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:23:57.921 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:23:57.921 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:23:57.921 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:23:57.921 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:23:57.921 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:23:57.921 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:23:57.921 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:23:57.921 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:23:58.179 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:23:58.179 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:23:58.179 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:23:58.179 
0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:23:58.179 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:23:58.179 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:23:58.179 NVMe Readv/Writev Request test 00:23:58.179 Attached to 0000:00:10.0 00:23:58.179 Attached to 0000:00:11.0 00:23:58.179 Attached to 0000:00:13.0 00:23:58.179 Attached to 0000:00:12.0 00:23:58.179 0000:00:10.0: build_io_request_2 test passed 00:23:58.179 0000:00:10.0: build_io_request_4 test passed 00:23:58.179 0000:00:10.0: build_io_request_5 test passed 00:23:58.179 0000:00:10.0: build_io_request_6 test passed 00:23:58.179 0000:00:10.0: build_io_request_7 test passed 00:23:58.179 0000:00:10.0: build_io_request_10 test passed 00:23:58.179 0000:00:11.0: build_io_request_2 test passed 00:23:58.179 0000:00:11.0: build_io_request_4 test passed 00:23:58.179 0000:00:11.0: build_io_request_5 test passed 00:23:58.179 0000:00:11.0: build_io_request_6 test passed 00:23:58.179 0000:00:11.0: build_io_request_7 test passed 00:23:58.180 0000:00:11.0: build_io_request_10 test passed 00:23:58.180 Cleaning up... 00:23:58.180 00:23:58.180 real 0m0.494s 00:23:58.180 user 0m0.258s 00:23:58.180 sys 0m0.194s 00:23:58.180 07:21:22 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.180 07:21:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:23:58.180 ************************************ 00:23:58.180 END TEST nvme_sgl 00:23:58.180 ************************************ 00:23:58.180 07:21:22 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:23:58.180 07:21:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.180 07:21:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.180 07:21:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:58.180 ************************************ 00:23:58.180 START TEST nvme_e2edp 00:23:58.180 ************************************ 00:23:58.180 07:21:22 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:23:58.438 NVMe Write/Read with End-to-End data protection test 00:23:58.438 Attached to 0000:00:10.0 00:23:58.438 Attached to 0000:00:11.0 00:23:58.438 Attached to 0000:00:13.0 00:23:58.438 Attached to 0000:00:12.0 00:23:58.438 Cleaning up... 
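The nvme_dp run above attaches to all four controllers and completes without errors. A minimal sketch of reproducing it by hand, assuming the repo layout shown in this log and that the NVMe devices are already bound to SPDK's userspace driver (scripts/setup.sh normally handles that; the harness passes no extra flags to the binary here):

    # Sketch only, not the harness invocation: run the end-to-end data
    # protection test binary directly against the bound controllers.
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh          # assumed prerequisite: bind NVMe devices
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp   # same binary run_test invokes above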
00:23:58.438 00:23:58.438 real 0m0.345s 00:23:58.438 user 0m0.125s 00:23:58.438 sys 0m0.174s 00:23:58.438 07:21:22 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.438 07:21:22 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:23:58.438 ************************************ 00:23:58.438 END TEST nvme_e2edp 00:23:58.438 ************************************ 00:23:58.697 07:21:22 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:23:58.697 07:21:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.697 07:21:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.697 07:21:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:58.697 ************************************ 00:23:58.697 START TEST nvme_reserve 00:23:58.697 ************************************ 00:23:58.697 07:21:22 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:23:58.955 ===================================================== 00:23:58.955 NVMe Controller at PCI bus 0, device 16, function 0 00:23:58.955 ===================================================== 00:23:58.955 Reservations: Not Supported 00:23:58.955 ===================================================== 00:23:58.955 NVMe Controller at PCI bus 0, device 17, function 0 00:23:58.955 ===================================================== 00:23:58.955 Reservations: Not Supported 00:23:58.955 ===================================================== 00:23:58.955 NVMe Controller at PCI bus 0, device 19, function 0 00:23:58.955 ===================================================== 00:23:58.955 Reservations: Not Supported 00:23:58.955 ===================================================== 00:23:58.955 NVMe Controller at PCI bus 0, device 18, function 0 00:23:58.955 ===================================================== 00:23:58.955 Reservations: Not Supported 00:23:58.955 Reservation test passed 00:23:58.955 00:23:58.955 real 0m0.325s 00:23:58.955 user 0m0.120s 00:23:58.955 sys 0m0.160s 00:23:58.955 07:21:22 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.955 07:21:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:23:58.955 ************************************ 00:23:58.955 END TEST nvme_reserve 00:23:58.955 ************************************ 00:23:58.955 07:21:23 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:23:58.955 07:21:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.955 07:21:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.955 07:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:58.955 ************************************ 00:23:58.955 START TEST nvme_err_injection 00:23:58.955 ************************************ 00:23:58.955 07:21:23 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:23:59.213 NVMe Error Injection test 00:23:59.213 Attached to 0000:00:10.0 00:23:59.213 Attached to 0000:00:11.0 00:23:59.213 Attached to 0000:00:13.0 00:23:59.213 Attached to 0000:00:12.0 00:23:59.213 0000:00:13.0: get features failed as expected 00:23:59.213 0000:00:12.0: get features failed as expected 00:23:59.213 0000:00:10.0: get features failed as expected 00:23:59.213 0000:00:11.0: get features failed as expected 00:23:59.213 
0000:00:10.0: get features successfully as expected 00:23:59.213 0000:00:11.0: get features successfully as expected 00:23:59.213 0000:00:13.0: get features successfully as expected 00:23:59.213 0000:00:12.0: get features successfully as expected 00:23:59.213 0000:00:10.0: read failed as expected 00:23:59.213 0000:00:11.0: read failed as expected 00:23:59.213 0000:00:13.0: read failed as expected 00:23:59.213 0000:00:12.0: read failed as expected 00:23:59.213 0000:00:10.0: read successfully as expected 00:23:59.213 0000:00:11.0: read successfully as expected 00:23:59.213 0000:00:13.0: read successfully as expected 00:23:59.213 0000:00:12.0: read successfully as expected 00:23:59.213 Cleaning up... 00:23:59.213 00:23:59.213 real 0m0.379s 00:23:59.213 user 0m0.154s 00:23:59.213 sys 0m0.179s 00:23:59.213 07:21:23 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.213 ************************************ 00:23:59.213 END TEST nvme_err_injection 00:23:59.213 07:21:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:23:59.213 ************************************ 00:23:59.472 07:21:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:23:59.472 07:21:23 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:23:59.472 07:21:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.472 07:21:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:59.472 ************************************ 00:23:59.472 START TEST nvme_overhead 00:23:59.472 ************************************ 00:23:59.472 07:21:23 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:24:00.848 Initializing NVMe Controllers 00:24:00.848 Attached to 0000:00:10.0 00:24:00.848 Attached to 0000:00:11.0 00:24:00.848 Attached to 0000:00:13.0 00:24:00.848 Attached to 0000:00:12.0 00:24:00.848 Initialization complete. Launching workers. 
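The overhead example is invoked with -o 4096 -t 1 -H -i 0 (see the run_test line above) and produces the submit/complete latency histograms that follow. A minimal sketch of a standalone run; the flag meanings in the comments are inferred from this output, not stated in the log:

    # Sketch: same command the harness runs, executed directly.
    # -o 4096: IO size in bytes (assumed); -t 1: run time in seconds (assumed);
    # -H: print the submit/complete histograms shown below (assumed);
    # -i 0: shared-memory group id passed to other tests here too (assumed).
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0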
00:24:00.848 submit (in ns) avg, min, max = 17163.7, 13189.5, 128385.7 00:24:00.848 complete (in ns) avg, min, max = 12069.2, 8667.6, 896405.7 00:24:00.848 00:24:00.848 Submit histogram 00:24:00.848 ================ 00:24:00.848 Range in us Cumulative Count 00:24:00.848 13.166 - 13.227: 0.0190% ( 2) 00:24:00.848 13.227 - 13.288: 0.0380% ( 2) 00:24:00.848 13.288 - 13.349: 0.1994% ( 17) 00:24:00.848 13.349 - 13.410: 0.3704% ( 18) 00:24:00.848 13.410 - 13.470: 0.4559% ( 9) 00:24:00.848 13.470 - 13.531: 0.5888% ( 14) 00:24:00.848 13.531 - 13.592: 0.7313% ( 15) 00:24:00.848 13.592 - 13.653: 0.8263% ( 10) 00:24:00.848 13.653 - 13.714: 0.9118% ( 9) 00:24:00.848 13.714 - 13.775: 0.9403% ( 3) 00:24:00.848 13.775 - 13.836: 0.9688% ( 3) 00:24:00.848 13.836 - 13.897: 0.9783% ( 1) 00:24:00.848 13.897 - 13.958: 1.0732% ( 10) 00:24:00.848 13.958 - 14.019: 1.2727% ( 21) 00:24:00.848 14.019 - 14.080: 1.6621% ( 41) 00:24:00.848 14.080 - 14.141: 2.7068% ( 110) 00:24:00.848 14.141 - 14.202: 3.9985% ( 136) 00:24:00.848 14.202 - 14.263: 5.0717% ( 113) 00:24:00.848 14.263 - 14.324: 6.5153% ( 152) 00:24:00.848 14.324 - 14.385: 8.2249% ( 180) 00:24:00.848 14.385 - 14.446: 9.7540% ( 161) 00:24:00.848 14.446 - 14.507: 11.2546% ( 158) 00:24:00.848 14.507 - 14.568: 12.6698% ( 149) 00:24:00.848 14.568 - 14.629: 14.0469% ( 145) 00:24:00.849 14.629 - 14.690: 15.7755% ( 182) 00:24:00.849 14.690 - 14.750: 17.5610% ( 188) 00:24:00.849 14.750 - 14.811: 19.9829% ( 255) 00:24:00.849 14.811 - 14.872: 22.7942% ( 296) 00:24:00.849 14.872 - 14.933: 25.5010% ( 285) 00:24:00.849 14.933 - 14.994: 28.0938% ( 273) 00:24:00.849 14.994 - 15.055: 30.2593% ( 228) 00:24:00.849 15.055 - 15.116: 32.1208% ( 196) 00:24:00.849 15.116 - 15.177: 33.6309% ( 159) 00:24:00.849 15.177 - 15.238: 34.7896% ( 122) 00:24:00.849 15.238 - 15.299: 35.9483% ( 122) 00:24:00.849 15.299 - 15.360: 36.8696% ( 97) 00:24:00.849 15.360 - 15.421: 37.6294% ( 80) 00:24:00.849 15.421 - 15.482: 38.2657% ( 67) 00:24:00.849 15.482 - 15.543: 38.8641% ( 63) 00:24:00.849 15.543 - 15.604: 39.3770% ( 54) 00:24:00.849 15.604 - 15.726: 40.5072% ( 119) 00:24:00.849 15.726 - 15.848: 42.2737% ( 186) 00:24:00.849 15.848 - 15.970: 44.8286% ( 269) 00:24:00.849 15.970 - 16.091: 46.6426% ( 191) 00:24:00.849 16.091 - 16.213: 48.1527% ( 159) 00:24:00.849 16.213 - 16.335: 50.0522% ( 200) 00:24:00.849 16.335 - 16.457: 52.8540% ( 295) 00:24:00.849 16.457 - 16.579: 55.9312% ( 324) 00:24:00.849 16.579 - 16.701: 58.1442% ( 233) 00:24:00.849 16.701 - 16.823: 59.7018% ( 164) 00:24:00.849 16.823 - 16.945: 60.6515% ( 100) 00:24:00.849 16.945 - 17.067: 61.2689% ( 65) 00:24:00.849 17.067 - 17.189: 61.6393% ( 39) 00:24:00.849 17.189 - 17.310: 61.9147% ( 29) 00:24:00.849 17.310 - 17.432: 62.2091% ( 31) 00:24:00.849 17.432 - 17.554: 62.3991% ( 20) 00:24:00.849 17.554 - 17.676: 62.5890% ( 20) 00:24:00.849 17.676 - 17.798: 62.7790% ( 20) 00:24:00.849 17.798 - 17.920: 62.8455% ( 7) 00:24:00.849 17.920 - 18.042: 62.9215% ( 8) 00:24:00.849 18.042 - 18.164: 63.0164% ( 10) 00:24:00.849 18.164 - 18.286: 63.1684% ( 16) 00:24:00.849 18.286 - 18.408: 63.3014% ( 14) 00:24:00.849 18.408 - 18.530: 63.4058% ( 11) 00:24:00.849 18.530 - 18.651: 63.6718% ( 28) 00:24:00.849 18.651 - 18.773: 66.3881% ( 286) 00:24:00.849 18.773 - 18.895: 74.0811% ( 810) 00:24:00.849 18.895 - 19.017: 81.2138% ( 751) 00:24:00.849 19.017 - 19.139: 85.4307% ( 444) 00:24:00.849 19.139 - 19.261: 88.1755% ( 289) 00:24:00.849 19.261 - 19.383: 89.5907% ( 149) 00:24:00.849 19.383 - 19.505: 90.7873% ( 126) 00:24:00.849 19.505 - 19.627: 91.7466% ( 101) 
00:24:00.849 19.627 - 19.749: 92.4304% ( 72) 00:24:00.849 19.749 - 19.870: 92.9148% ( 51) 00:24:00.849 19.870 - 19.992: 93.2377% ( 34) 00:24:00.849 19.992 - 20.114: 93.5701% ( 35) 00:24:00.849 20.114 - 20.236: 93.7601% ( 20) 00:24:00.849 20.236 - 20.358: 93.8931% ( 14) 00:24:00.849 20.358 - 20.480: 94.0450% ( 16) 00:24:00.849 20.480 - 20.602: 94.1400% ( 10) 00:24:00.849 20.602 - 20.724: 94.2160% ( 8) 00:24:00.849 20.724 - 20.846: 94.3584% ( 15) 00:24:00.849 20.846 - 20.968: 94.4344% ( 8) 00:24:00.849 20.968 - 21.090: 94.5484% ( 12) 00:24:00.849 21.090 - 21.211: 94.6244% ( 8) 00:24:00.849 21.211 - 21.333: 94.6529% ( 3) 00:24:00.849 21.333 - 21.455: 94.7478% ( 10) 00:24:00.849 21.455 - 21.577: 94.8143% ( 7) 00:24:00.849 21.577 - 21.699: 94.8618% ( 5) 00:24:00.849 21.699 - 21.821: 94.8903% ( 3) 00:24:00.849 21.821 - 21.943: 94.9663% ( 8) 00:24:00.849 21.943 - 22.065: 95.0518% ( 9) 00:24:00.849 22.065 - 22.187: 95.0992% ( 5) 00:24:00.849 22.187 - 22.309: 95.1847% ( 9) 00:24:00.849 22.309 - 22.430: 95.3177% ( 14) 00:24:00.849 22.430 - 22.552: 95.4412% ( 13) 00:24:00.849 22.552 - 22.674: 95.5171% ( 8) 00:24:00.849 22.674 - 22.796: 95.5741% ( 6) 00:24:00.849 22.796 - 22.918: 95.6501% ( 8) 00:24:00.849 22.918 - 23.040: 95.7166% ( 7) 00:24:00.849 23.040 - 23.162: 95.8021% ( 9) 00:24:00.849 23.162 - 23.284: 95.8211% ( 2) 00:24:00.849 23.284 - 23.406: 95.8781% ( 6) 00:24:00.849 23.406 - 23.528: 95.9635% ( 9) 00:24:00.849 23.528 - 23.650: 96.0015% ( 4) 00:24:00.849 23.650 - 23.771: 96.0300% ( 3) 00:24:00.849 23.893 - 24.015: 96.0585% ( 3) 00:24:00.849 24.015 - 24.137: 96.1060% ( 5) 00:24:00.849 24.137 - 24.259: 96.1630% ( 6) 00:24:00.849 24.259 - 24.381: 96.2864% ( 13) 00:24:00.849 24.381 - 24.503: 96.3814% ( 10) 00:24:00.849 24.503 - 24.625: 96.5334% ( 16) 00:24:00.849 24.625 - 24.747: 96.6758% ( 15) 00:24:00.849 24.747 - 24.869: 96.7898% ( 12) 00:24:00.849 24.869 - 24.990: 96.8943% ( 11) 00:24:00.849 24.990 - 25.112: 96.9703% ( 8) 00:24:00.849 25.112 - 25.234: 97.0273% ( 6) 00:24:00.849 25.234 - 25.356: 97.0558% ( 3) 00:24:00.849 25.356 - 25.478: 97.1032% ( 5) 00:24:00.849 25.478 - 25.600: 97.1507% ( 5) 00:24:00.849 25.600 - 25.722: 97.1792% ( 3) 00:24:00.849 25.722 - 25.844: 97.2647% ( 9) 00:24:00.849 25.844 - 25.966: 97.3312% ( 7) 00:24:00.849 25.966 - 26.088: 97.3882% ( 6) 00:24:00.849 26.088 - 26.210: 97.4641% ( 8) 00:24:00.849 26.210 - 26.331: 97.5401% ( 8) 00:24:00.849 26.331 - 26.453: 97.6161% ( 8) 00:24:00.849 26.453 - 26.575: 97.6826% ( 7) 00:24:00.849 26.575 - 26.697: 97.7491% ( 7) 00:24:00.849 26.697 - 26.819: 97.7871% ( 4) 00:24:00.849 26.819 - 26.941: 97.8440% ( 6) 00:24:00.849 26.941 - 27.063: 97.9105% ( 7) 00:24:00.849 27.063 - 27.185: 97.9675% ( 6) 00:24:00.849 27.185 - 27.307: 98.0720% ( 11) 00:24:00.849 27.307 - 27.429: 98.1480% ( 8) 00:24:00.849 27.429 - 27.550: 98.2145% ( 7) 00:24:00.849 27.550 - 27.672: 98.3569% ( 15) 00:24:00.849 27.672 - 27.794: 98.4234% ( 7) 00:24:00.849 27.794 - 27.916: 98.4899% ( 7) 00:24:00.849 27.916 - 28.038: 98.5659% ( 8) 00:24:00.849 28.038 - 28.160: 98.6134% ( 5) 00:24:00.849 28.160 - 28.282: 98.6703% ( 6) 00:24:00.849 28.282 - 28.404: 98.7653% ( 10) 00:24:00.849 28.404 - 28.526: 98.8318% ( 7) 00:24:00.849 28.526 - 28.648: 98.8698% ( 4) 00:24:00.849 28.648 - 28.770: 98.9078% ( 4) 00:24:00.849 28.770 - 28.891: 98.9648% ( 6) 00:24:00.849 28.891 - 29.013: 98.9838% ( 2) 00:24:00.849 29.013 - 29.135: 99.0123% ( 3) 00:24:00.849 29.135 - 29.257: 99.0502% ( 4) 00:24:00.849 29.257 - 29.379: 99.0597% ( 1) 00:24:00.849 29.379 - 29.501: 99.0977% ( 4) 00:24:00.849 
29.501 - 29.623: 99.1072% ( 1) 00:24:00.849 29.623 - 29.745: 99.1167% ( 1) 00:24:00.849 29.745 - 29.867: 99.1452% ( 3) 00:24:00.849 29.867 - 29.989: 99.1832% ( 4) 00:24:00.849 29.989 - 30.110: 99.2117% ( 3) 00:24:00.849 30.110 - 30.232: 99.2497% ( 4) 00:24:00.849 30.232 - 30.354: 99.3067% ( 6) 00:24:00.849 30.354 - 30.476: 99.3447% ( 4) 00:24:00.849 30.720 - 30.842: 99.3542% ( 1) 00:24:00.849 30.964 - 31.086: 99.3637% ( 1) 00:24:00.849 31.208 - 31.451: 99.4017% ( 4) 00:24:00.849 31.451 - 31.695: 99.4396% ( 4) 00:24:00.849 31.695 - 31.939: 99.4776% ( 4) 00:24:00.849 31.939 - 32.183: 99.5156% ( 4) 00:24:00.849 32.183 - 32.427: 99.5251% ( 1) 00:24:00.849 32.427 - 32.670: 99.5441% ( 2) 00:24:00.849 32.670 - 32.914: 99.5536% ( 1) 00:24:00.849 32.914 - 33.158: 99.5726% ( 2) 00:24:00.849 33.158 - 33.402: 99.5821% ( 1) 00:24:00.849 33.402 - 33.646: 99.6391% ( 6) 00:24:00.849 34.133 - 34.377: 99.6581% ( 2) 00:24:00.849 34.377 - 34.621: 99.6676% ( 1) 00:24:00.849 34.621 - 34.865: 99.6771% ( 1) 00:24:00.849 34.865 - 35.109: 99.6866% ( 1) 00:24:00.849 35.109 - 35.352: 99.6961% ( 1) 00:24:00.849 35.352 - 35.596: 99.7056% ( 1) 00:24:00.849 35.840 - 36.084: 99.7341% ( 3) 00:24:00.849 36.084 - 36.328: 99.7626% ( 3) 00:24:00.849 36.815 - 37.059: 99.7816% ( 2) 00:24:00.849 37.790 - 38.034: 99.8006% ( 2) 00:24:00.849 38.034 - 38.278: 99.8100% ( 1) 00:24:00.849 38.522 - 38.766: 99.8195% ( 1) 00:24:00.849 38.766 - 39.010: 99.8290% ( 1) 00:24:00.849 39.253 - 39.497: 99.8385% ( 1) 00:24:00.849 39.497 - 39.741: 99.8480% ( 1) 00:24:00.849 39.741 - 39.985: 99.8575% ( 1) 00:24:00.849 39.985 - 40.229: 99.8670% ( 1) 00:24:00.849 40.472 - 40.716: 99.8860% ( 2) 00:24:00.849 42.423 - 42.667: 99.8955% ( 1) 00:24:00.849 43.398 - 43.642: 99.9050% ( 1) 00:24:00.849 45.349 - 45.592: 99.9145% ( 1) 00:24:00.849 45.836 - 46.080: 99.9240% ( 1) 00:24:00.849 46.080 - 46.324: 99.9335% ( 1) 00:24:00.849 47.543 - 47.787: 99.9525% ( 2) 00:24:00.850 49.006 - 49.250: 99.9620% ( 1) 00:24:00.850 51.931 - 52.175: 99.9715% ( 1) 00:24:00.850 52.907 - 53.150: 99.9810% ( 1) 00:24:00.850 53.882 - 54.126: 99.9905% ( 1) 00:24:00.850 127.756 - 128.731: 100.0000% ( 1) 00:24:00.850 00:24:00.850 Complete histogram 00:24:00.850 ================== 00:24:00.850 Range in us Cumulative Count 00:24:00.850 8.655 - 8.716: 0.1330% ( 14) 00:24:00.850 8.716 - 8.777: 0.5699% ( 46) 00:24:00.850 8.777 - 8.838: 0.7883% ( 23) 00:24:00.850 8.838 - 8.899: 0.9118% ( 13) 00:24:00.850 8.899 - 8.960: 1.0257% ( 12) 00:24:00.850 8.960 - 9.021: 1.1207% ( 10) 00:24:00.850 9.021 - 9.082: 1.1777% ( 6) 00:24:00.850 9.143 - 9.204: 1.1967% ( 2) 00:24:00.850 9.204 - 9.265: 1.6621% ( 49) 00:24:00.850 9.265 - 9.326: 4.5683% ( 306) 00:24:00.850 9.326 - 9.387: 8.9942% ( 466) 00:24:00.850 9.387 - 9.448: 11.1407% ( 226) 00:24:00.850 9.448 - 9.509: 12.5178% ( 145) 00:24:00.850 9.509 - 9.570: 13.9994% ( 156) 00:24:00.850 9.570 - 9.630: 17.1526% ( 332) 00:24:00.850 9.630 - 9.691: 22.2433% ( 536) 00:24:00.850 9.691 - 9.752: 26.7927% ( 479) 00:24:00.850 9.752 - 9.813: 29.8319% ( 320) 00:24:00.850 9.813 - 9.874: 32.3013% ( 260) 00:24:00.850 9.874 - 9.935: 34.2863% ( 209) 00:24:00.850 9.935 - 9.996: 35.8154% ( 161) 00:24:00.850 9.996 - 10.057: 36.8506% ( 109) 00:24:00.850 10.057 - 10.118: 37.7909% ( 99) 00:24:00.850 10.118 - 10.179: 38.6551% ( 91) 00:24:00.850 10.179 - 10.240: 39.4624% ( 85) 00:24:00.850 10.240 - 10.301: 40.1653% ( 74) 00:24:00.850 10.301 - 10.362: 40.7921% ( 66) 00:24:00.850 10.362 - 10.423: 41.2290% ( 46) 00:24:00.850 10.423 - 10.484: 41.4474% ( 23) 00:24:00.850 10.484 - 
10.545: 41.6849% ( 25) 00:24:00.850 10.545 - 10.606: 41.8748% ( 20) 00:24:00.850 10.606 - 10.667: 42.1028% ( 24) 00:24:00.850 10.667 - 10.728: 42.3212% ( 23) 00:24:00.850 10.728 - 10.789: 42.5112% ( 20) 00:24:00.850 10.789 - 10.850: 42.7011% ( 20) 00:24:00.850 10.850 - 10.910: 42.8531% ( 16) 00:24:00.850 10.910 - 10.971: 42.9765% ( 13) 00:24:00.850 10.971 - 11.032: 43.0240% ( 5) 00:24:00.850 11.032 - 11.093: 43.0905% ( 7) 00:24:00.850 11.093 - 11.154: 43.2235% ( 14) 00:24:00.850 11.154 - 11.215: 43.3185% ( 10) 00:24:00.850 11.215 - 11.276: 43.5274% ( 22) 00:24:00.850 11.276 - 11.337: 43.8978% ( 39) 00:24:00.850 11.337 - 11.398: 44.1257% ( 24) 00:24:00.850 11.398 - 11.459: 44.3252% ( 21) 00:24:00.850 11.459 - 11.520: 44.4297% ( 11) 00:24:00.850 11.520 - 11.581: 44.5721% ( 15) 00:24:00.850 11.581 - 11.642: 44.6386% ( 7) 00:24:00.850 11.642 - 11.703: 44.7051% ( 7) 00:24:00.850 11.703 - 11.764: 44.7526% ( 5) 00:24:00.850 11.764 - 11.825: 44.7716% ( 2) 00:24:00.850 11.825 - 11.886: 44.8001% ( 3) 00:24:00.850 11.886 - 11.947: 44.8286% ( 3) 00:24:00.850 11.947 - 12.008: 44.8476% ( 2) 00:24:00.850 12.008 - 12.069: 44.8666% ( 2) 00:24:00.850 12.069 - 12.130: 44.8761% ( 1) 00:24:00.850 12.130 - 12.190: 44.8951% ( 2) 00:24:00.850 12.190 - 12.251: 44.9140% ( 2) 00:24:00.850 12.251 - 12.312: 44.9235% ( 1) 00:24:00.850 12.373 - 12.434: 44.9425% ( 2) 00:24:00.850 12.434 - 12.495: 44.9520% ( 1) 00:24:00.850 12.495 - 12.556: 44.9805% ( 3) 00:24:00.850 12.556 - 12.617: 45.0850% ( 11) 00:24:00.850 12.617 - 12.678: 47.4879% ( 253) 00:24:00.850 12.678 - 12.739: 54.0792% ( 694) 00:24:00.850 12.739 - 12.800: 61.8007% ( 813) 00:24:00.850 12.800 - 12.861: 66.3216% ( 476) 00:24:00.850 12.861 - 12.922: 69.1044% ( 293) 00:24:00.850 12.922 - 12.983: 70.8899% ( 188) 00:24:00.850 12.983 - 13.044: 72.5045% ( 170) 00:24:00.850 13.044 - 13.105: 74.1096% ( 169) 00:24:00.850 13.105 - 13.166: 75.3158% ( 127) 00:24:00.850 13.166 - 13.227: 76.1326% ( 86) 00:24:00.850 13.227 - 13.288: 76.9114% ( 82) 00:24:00.850 13.288 - 13.349: 77.6902% ( 82) 00:24:00.850 13.349 - 13.410: 78.5545% ( 91) 00:24:00.850 13.410 - 13.470: 79.5327% ( 103) 00:24:00.850 13.470 - 13.531: 80.7674% ( 130) 00:24:00.850 13.531 - 13.592: 82.1730% ( 148) 00:24:00.850 13.592 - 13.653: 83.3602% ( 125) 00:24:00.850 13.653 - 13.714: 84.6044% ( 131) 00:24:00.850 13.714 - 13.775: 85.9626% ( 143) 00:24:00.850 13.775 - 13.836: 87.2258% ( 133) 00:24:00.850 13.836 - 13.897: 88.4035% ( 124) 00:24:00.850 13.897 - 13.958: 89.3532% ( 100) 00:24:00.850 13.958 - 14.019: 90.2460% ( 94) 00:24:00.850 14.019 - 14.080: 91.2052% ( 101) 00:24:00.850 14.080 - 14.141: 92.0600% ( 90) 00:24:00.850 14.141 - 14.202: 92.8293% ( 81) 00:24:00.850 14.202 - 14.263: 93.5511% ( 76) 00:24:00.850 14.263 - 14.324: 94.0450% ( 52) 00:24:00.850 14.324 - 14.385: 94.5104% ( 49) 00:24:00.850 14.385 - 14.446: 94.8808% ( 39) 00:24:00.850 14.446 - 14.507: 95.1182% ( 25) 00:24:00.850 14.507 - 14.568: 95.3272% ( 22) 00:24:00.850 14.568 - 14.629: 95.4792% ( 16) 00:24:00.850 14.629 - 14.690: 95.5551% ( 8) 00:24:00.850 14.690 - 14.750: 95.6976% ( 15) 00:24:00.850 14.750 - 14.811: 95.7641% ( 7) 00:24:00.850 14.811 - 14.872: 95.8781% ( 12) 00:24:00.850 14.872 - 14.933: 95.9920% ( 12) 00:24:00.850 14.933 - 14.994: 96.0490% ( 6) 00:24:00.850 14.994 - 15.055: 96.1155% ( 7) 00:24:00.850 15.055 - 15.116: 96.1535% ( 4) 00:24:00.850 15.116 - 15.177: 96.1915% ( 4) 00:24:00.850 15.177 - 15.238: 96.2010% ( 1) 00:24:00.850 15.238 - 15.299: 96.2580% ( 6) 00:24:00.850 15.299 - 15.360: 96.3149% ( 6) 00:24:00.850 15.360 - 
15.421: 96.3244% ( 1) 00:24:00.850 15.421 - 15.482: 96.3434% ( 2) 00:24:00.850 15.482 - 15.543: 96.3624% ( 2) 00:24:00.850 15.543 - 15.604: 96.3909% ( 3) 00:24:00.850 15.604 - 15.726: 96.4574% ( 7) 00:24:00.850 15.726 - 15.848: 96.4954% ( 4) 00:24:00.850 15.848 - 15.970: 96.5429% ( 5) 00:24:00.850 15.970 - 16.091: 96.5714% ( 3) 00:24:00.850 16.091 - 16.213: 96.6189% ( 5) 00:24:00.850 16.213 - 16.335: 96.6569% ( 4) 00:24:00.850 16.335 - 16.457: 96.7328% ( 8) 00:24:00.850 16.457 - 16.579: 96.7708% ( 4) 00:24:00.850 16.579 - 16.701: 96.7993% ( 3) 00:24:00.850 16.701 - 16.823: 96.8278% ( 3) 00:24:00.850 16.823 - 16.945: 96.8943% ( 7) 00:24:00.850 16.945 - 17.067: 96.9228% ( 3) 00:24:00.850 17.067 - 17.189: 96.9893% ( 7) 00:24:00.850 17.189 - 17.310: 97.0273% ( 4) 00:24:00.850 17.310 - 17.432: 97.0652% ( 4) 00:24:00.850 17.432 - 17.554: 97.0842% ( 2) 00:24:00.850 17.554 - 17.676: 97.1507% ( 7) 00:24:00.850 17.676 - 17.798: 97.2077% ( 6) 00:24:00.850 17.798 - 17.920: 97.2552% ( 5) 00:24:00.850 17.920 - 18.042: 97.3122% ( 6) 00:24:00.850 18.042 - 18.164: 97.3312% ( 2) 00:24:00.850 18.164 - 18.286: 97.3502% ( 2) 00:24:00.850 18.286 - 18.408: 97.3692% ( 2) 00:24:00.850 18.408 - 18.530: 97.4072% ( 4) 00:24:00.850 18.530 - 18.651: 97.4736% ( 7) 00:24:00.850 18.651 - 18.773: 97.4831% ( 1) 00:24:00.850 18.773 - 18.895: 97.5306% ( 5) 00:24:00.850 18.895 - 19.017: 97.5781% ( 5) 00:24:00.850 19.017 - 19.139: 97.6256% ( 5) 00:24:00.850 19.139 - 19.261: 97.6351% ( 1) 00:24:00.850 19.261 - 19.383: 97.6731% ( 4) 00:24:00.850 19.383 - 19.505: 97.7016% ( 3) 00:24:00.850 19.505 - 19.627: 97.7681% ( 7) 00:24:00.850 19.627 - 19.749: 97.8346% ( 7) 00:24:00.850 19.749 - 19.870: 97.8440% ( 1) 00:24:00.850 19.870 - 19.992: 97.8725% ( 3) 00:24:00.850 19.992 - 20.114: 97.8820% ( 1) 00:24:00.850 20.114 - 20.236: 97.8915% ( 1) 00:24:00.850 20.236 - 20.358: 97.9010% ( 1) 00:24:00.850 20.480 - 20.602: 97.9485% ( 5) 00:24:00.850 20.602 - 20.724: 97.9675% ( 2) 00:24:00.850 20.724 - 20.846: 98.0245% ( 6) 00:24:00.850 20.846 - 20.968: 98.1100% ( 9) 00:24:00.850 20.968 - 21.090: 98.1575% ( 5) 00:24:00.850 21.090 - 21.211: 98.2050% ( 5) 00:24:00.850 21.211 - 21.333: 98.2999% ( 10) 00:24:00.850 21.333 - 21.455: 98.3664% ( 7) 00:24:00.850 21.455 - 21.577: 98.4519% ( 9) 00:24:00.850 21.577 - 21.699: 98.4899% ( 4) 00:24:00.850 21.699 - 21.821: 98.5469% ( 6) 00:24:00.850 21.821 - 21.943: 98.5849% ( 4) 00:24:00.850 21.943 - 22.065: 98.6798% ( 10) 00:24:00.850 22.065 - 22.187: 98.7463% ( 7) 00:24:00.850 22.187 - 22.309: 98.8413% ( 10) 00:24:00.850 22.309 - 22.430: 98.8983% ( 6) 00:24:00.850 22.430 - 22.552: 99.0028% ( 11) 00:24:00.850 22.552 - 22.674: 99.0217% ( 2) 00:24:00.850 22.674 - 22.796: 99.0502% ( 3) 00:24:00.850 22.796 - 22.918: 99.0692% ( 2) 00:24:00.850 22.918 - 23.040: 99.1167% ( 5) 00:24:00.850 23.040 - 23.162: 99.1737% ( 6) 00:24:00.851 23.162 - 23.284: 99.2022% ( 3) 00:24:00.851 23.284 - 23.406: 99.2402% ( 4) 00:24:00.851 23.528 - 23.650: 99.2592% ( 2) 00:24:00.851 23.650 - 23.771: 99.2687% ( 1) 00:24:00.851 23.771 - 23.893: 99.2877% ( 2) 00:24:00.851 23.893 - 24.015: 99.3067% ( 2) 00:24:00.851 24.015 - 24.137: 99.3162% ( 1) 00:24:00.851 24.137 - 24.259: 99.3352% ( 2) 00:24:00.851 24.259 - 24.381: 99.3732% ( 4) 00:24:00.851 24.381 - 24.503: 99.3922% ( 2) 00:24:00.851 24.503 - 24.625: 99.4112% ( 2) 00:24:00.851 24.625 - 24.747: 99.4301% ( 2) 00:24:00.851 24.747 - 24.869: 99.4396% ( 1) 00:24:00.851 24.869 - 24.990: 99.4491% ( 1) 00:24:00.851 24.990 - 25.112: 99.4681% ( 2) 00:24:00.851 25.112 - 25.234: 99.5251% ( 6) 
00:24:00.851 25.234 - 25.356: 99.5346% ( 1) 00:24:00.851 25.478 - 25.600: 99.5441% ( 1) 00:24:00.851 25.600 - 25.722: 99.5536% ( 1) 00:24:00.851 25.966 - 26.088: 99.5821% ( 3) 00:24:00.851 26.210 - 26.331: 99.6011% ( 2) 00:24:00.851 26.331 - 26.453: 99.6201% ( 2) 00:24:00.851 26.453 - 26.575: 99.6486% ( 3) 00:24:00.851 26.941 - 27.063: 99.6581% ( 1) 00:24:00.851 27.063 - 27.185: 99.6676% ( 1) 00:24:00.851 27.185 - 27.307: 99.6771% ( 1) 00:24:00.851 27.307 - 27.429: 99.6866% ( 1) 00:24:00.851 27.429 - 27.550: 99.6961% ( 1) 00:24:00.851 27.672 - 27.794: 99.7056% ( 1) 00:24:00.851 28.038 - 28.160: 99.7151% ( 1) 00:24:00.851 28.526 - 28.648: 99.7246% ( 1) 00:24:00.851 28.648 - 28.770: 99.7341% ( 1) 00:24:00.851 28.770 - 28.891: 99.7436% ( 1) 00:24:00.851 29.135 - 29.257: 99.7531% ( 1) 00:24:00.851 29.379 - 29.501: 99.7626% ( 1) 00:24:00.851 29.501 - 29.623: 99.7721% ( 1) 00:24:00.851 29.623 - 29.745: 99.7816% ( 1) 00:24:00.851 29.989 - 30.110: 99.8006% ( 2) 00:24:00.851 30.232 - 30.354: 99.8100% ( 1) 00:24:00.851 30.598 - 30.720: 99.8290% ( 2) 00:24:00.851 30.842 - 30.964: 99.8385% ( 1) 00:24:00.851 30.964 - 31.086: 99.8480% ( 1) 00:24:00.851 31.208 - 31.451: 99.8670% ( 2) 00:24:00.851 31.451 - 31.695: 99.8765% ( 1) 00:24:00.851 31.695 - 31.939: 99.8955% ( 2) 00:24:00.851 32.183 - 32.427: 99.9050% ( 1) 00:24:00.851 33.402 - 33.646: 99.9145% ( 1) 00:24:00.851 35.352 - 35.596: 99.9240% ( 1) 00:24:00.851 35.596 - 35.840: 99.9335% ( 1) 00:24:00.851 36.815 - 37.059: 99.9430% ( 1) 00:24:00.851 41.935 - 42.179: 99.9525% ( 1) 00:24:00.851 47.299 - 47.543: 99.9620% ( 1) 00:24:00.851 48.518 - 48.762: 99.9715% ( 1) 00:24:00.851 67.779 - 68.267: 99.9810% ( 1) 00:24:00.851 103.863 - 104.350: 99.9905% ( 1) 00:24:00.851 893.318 - 897.219: 100.0000% ( 1) 00:24:00.851 00:24:00.851 ************************************ 00:24:00.851 END TEST nvme_overhead 00:24:00.851 ************************************ 00:24:00.851 00:24:00.851 real 0m1.407s 00:24:00.851 user 0m1.150s 00:24:00.851 sys 0m0.194s 00:24:00.851 07:21:24 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.851 07:21:24 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:24:00.851 07:21:24 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:24:00.851 07:21:24 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:24:00.851 07:21:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.851 07:21:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:00.851 ************************************ 00:24:00.851 START TEST nvme_arbitration 00:24:00.851 ************************************ 00:24:00.851 07:21:24 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:24:05.069 Initializing NVMe Controllers 00:24:05.069 Attached to 0000:00:10.0 00:24:05.069 Attached to 0000:00:11.0 00:24:05.069 Attached to 0000:00:13.0 00:24:05.069 Attached to 0000:00:12.0 00:24:05.069 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:24:05.069 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:24:05.069 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:24:05.069 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:24:05.069 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:24:05.069 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:24:05.069 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:24:05.069 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:24:05.069 Initialization complete. Launching workers. 00:24:05.069 Starting thread on core 1 with urgent priority queue 00:24:05.069 Starting thread on core 2 with urgent priority queue 00:24:05.069 Starting thread on core 3 with urgent priority queue 00:24:05.069 Starting thread on core 0 with urgent priority queue 00:24:05.069 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:24:05.069 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios 00:24:05.069 QEMU NVMe Ctrl (12341 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:24:05.069 QEMU NVMe Ctrl (12342 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:24:05.069 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:24:05.069 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:24:05.069 ======================================================== 00:24:05.069 00:24:05.069 ************************************ 00:24:05.069 END TEST nvme_arbitration 00:24:05.069 ************************************ 00:24:05.069 00:24:05.069 real 0m3.527s 00:24:05.069 user 0m9.383s 00:24:05.069 sys 0m0.212s 00:24:05.069 07:21:28 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.069 07:21:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:24:05.069 07:21:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:05.069 ************************************ 00:24:05.069 START TEST nvme_single_aen 00:24:05.069 ************************************ 00:24:05.069 07:21:28 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:24:05.069 Asynchronous Event Request test 00:24:05.069 Attached to 0000:00:10.0 00:24:05.069 Attached to 0000:00:11.0 00:24:05.069 Attached to 0000:00:13.0 00:24:05.069 Attached to 0000:00:12.0 00:24:05.069 Reset controller to setup AER completions for this process 00:24:05.069 Registering asynchronous event callbacks... 
00:24:05.069 Getting orig temperature thresholds of all controllers 00:24:05.069 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:05.069 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:05.069 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:05.069 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:05.069 Setting all controllers temperature threshold low to trigger AER 00:24:05.069 Waiting for all controllers temperature threshold to be set lower 00:24:05.069 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:05.069 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:24:05.069 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:05.069 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:24:05.069 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:05.069 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:24:05.069 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:05.069 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:24:05.069 Waiting for all controllers to trigger AER and reset threshold 00:24:05.069 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:05.069 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:05.069 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:05.069 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:05.069 Cleaning up... 00:24:05.069 ************************************ 00:24:05.069 END TEST nvme_single_aen 00:24:05.069 ************************************ 00:24:05.069 00:24:05.069 real 0m0.386s 00:24:05.069 user 0m0.142s 00:24:05.069 sys 0m0.197s 00:24:05.069 07:21:28 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.069 07:21:28 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:24:05.069 07:21:28 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.069 07:21:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:05.069 ************************************ 00:24:05.069 START TEST nvme_doorbell_aers 00:24:05.069 ************************************ 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:24:05.069 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:05.070 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:24:05.070 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:05.070 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:05.070 07:21:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
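    # The traced get_nvme_bdfs lines above build the bdf list that
    # nvme_doorbell_aers iterates, one doorbell_aers run per controller.
    # A standalone sketch of the same discovery, assuming gen_nvme.sh emits
    # SPDK's JSON config (paths exactly as in this run); jq pulls each
    # controller's PCI address out of .config[].params.traddr:
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Expected here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 (four controllers)
    for bdf in "${bdfs[@]}"; do
        printf 'will run doorbell_aers against %s\n' "$bdf"
    done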
00:24:05.070 07:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:24:05.070 07:21:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:05.070 07:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:24:05.070 07:21:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:24:05.327 [2024-11-20 07:21:29.357742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:15.310 Executing: test_write_invalid_db 00:24:15.310 Waiting for AER completion... 00:24:15.310 Failure: test_write_invalid_db 00:24:15.310 00:24:15.310 Executing: test_invalid_db_write_overflow_sq 00:24:15.310 Waiting for AER completion... 00:24:15.310 Failure: test_invalid_db_write_overflow_sq 00:24:15.310 00:24:15.310 Executing: test_invalid_db_write_overflow_cq 00:24:15.310 Waiting for AER completion... 00:24:15.310 Failure: test_invalid_db_write_overflow_cq 00:24:15.310 00:24:15.310 07:21:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:24:15.310 07:21:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:24:15.310 [2024-11-20 07:21:39.399980] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:25.359 Executing: test_write_invalid_db 00:24:25.359 Waiting for AER completion... 00:24:25.359 Failure: test_write_invalid_db 00:24:25.359 00:24:25.359 Executing: test_invalid_db_write_overflow_sq 00:24:25.359 Waiting for AER completion... 00:24:25.359 Failure: test_invalid_db_write_overflow_sq 00:24:25.359 00:24:25.359 Executing: test_invalid_db_write_overflow_cq 00:24:25.359 Waiting for AER completion... 00:24:25.359 Failure: test_invalid_db_write_overflow_cq 00:24:25.359 00:24:25.359 07:21:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:24:25.359 07:21:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:24:25.359 [2024-11-20 07:21:49.500063] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:35.356 Executing: test_write_invalid_db 00:24:35.356 Waiting for AER completion... 00:24:35.356 Failure: test_write_invalid_db 00:24:35.356 00:24:35.356 Executing: test_invalid_db_write_overflow_sq 00:24:35.356 Waiting for AER completion... 00:24:35.356 Failure: test_invalid_db_write_overflow_sq 00:24:35.356 00:24:35.356 Executing: test_invalid_db_write_overflow_cq 00:24:35.356 Waiting for AER completion... 
00:24:35.356 Failure: test_invalid_db_write_overflow_cq 00:24:35.356 00:24:35.356 07:21:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:24:35.356 07:21:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:24:35.356 [2024-11-20 07:21:59.529636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.425 Executing: test_write_invalid_db 00:24:45.425 Waiting for AER completion... 00:24:45.425 Failure: test_write_invalid_db 00:24:45.425 00:24:45.425 Executing: test_invalid_db_write_overflow_sq 00:24:45.425 Waiting for AER completion... 00:24:45.425 Failure: test_invalid_db_write_overflow_sq 00:24:45.425 00:24:45.425 Executing: test_invalid_db_write_overflow_cq 00:24:45.425 Waiting for AER completion... 00:24:45.425 Failure: test_invalid_db_write_overflow_cq 00:24:45.425 00:24:45.425 ************************************ 00:24:45.425 END TEST nvme_doorbell_aers 00:24:45.425 ************************************ 00:24:45.425 00:24:45.425 real 0m40.300s 00:24:45.425 user 0m28.502s 00:24:45.425 sys 0m11.386s 00:24:45.425 07:22:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.425 07:22:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:24:45.425 07:22:09 nvme -- nvme/nvme.sh@97 -- # uname 00:24:45.426 07:22:09 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:24:45.426 07:22:09 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:24:45.426 07:22:09 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:24:45.426 07:22:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.426 07:22:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:45.426 ************************************ 00:24:45.426 START TEST nvme_multi_aen 00:24:45.426 ************************************ 00:24:45.426 07:22:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:24:45.426 [2024-11-20 07:22:09.568554] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.568942] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.569087] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.571199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.571382] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.571405] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.572773] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. 
Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.572952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.572975] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.574327] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.574373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 [2024-11-20 07:22:09.574391] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:24:45.426 Child process pid: 65789 00:24:45.685 [Child] Asynchronous Event Request test 00:24:45.685 [Child] Attached to 0000:00:10.0 00:24:45.685 [Child] Attached to 0000:00:11.0 00:24:45.685 [Child] Attached to 0000:00:13.0 00:24:45.685 [Child] Attached to 0000:00:12.0 00:24:45.685 [Child] Registering asynchronous event callbacks... 00:24:45.685 [Child] Getting orig temperature thresholds of all controllers 00:24:45.685 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.685 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.685 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.685 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.685 [Child] Waiting for all controllers to trigger AER and reset threshold 00:24:45.685 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.685 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.685 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.685 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.685 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.685 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.685 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.685 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.685 [Child] Cleaning up... 00:24:45.944 Asynchronous Event Request test 00:24:45.944 Attached to 0000:00:10.0 00:24:45.944 Attached to 0000:00:11.0 00:24:45.944 Attached to 0000:00:13.0 00:24:45.944 Attached to 0000:00:12.0 00:24:45.944 Reset controller to setup AER completions for this process 00:24:45.944 Registering asynchronous event callbacks... 
00:24:45.944 Getting orig temperature thresholds of all controllers 00:24:45.944 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.944 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.944 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.944 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:24:45.944 Setting all controllers temperature threshold low to trigger AER 00:24:45.944 Waiting for all controllers temperature threshold to be set lower 00:24:45.944 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.944 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:24:45.944 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.944 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:24:45.944 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.944 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:24:45.944 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:24:45.944 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:24:45.944 Waiting for all controllers to trigger AER and reset threshold 00:24:45.944 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.945 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.945 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.945 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:24:45.945 Cleaning up... 00:24:45.945 00:24:45.945 real 0m0.621s 00:24:45.945 user 0m0.220s 00:24:45.945 sys 0m0.294s 00:24:45.945 07:22:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.945 07:22:09 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:24:45.945 ************************************ 00:24:45.945 END TEST nvme_multi_aen 00:24:45.945 ************************************ 00:24:45.945 07:22:09 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:24:45.945 07:22:09 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:45.945 07:22:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.945 07:22:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:45.945 ************************************ 00:24:45.945 START TEST nvme_startup 00:24:45.945 ************************************ 00:24:45.945 07:22:09 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:24:46.203 Initializing NVMe Controllers 00:24:46.203 Attached to 0000:00:10.0 00:24:46.203 Attached to 0000:00:11.0 00:24:46.203 Attached to 0000:00:13.0 00:24:46.203 Attached to 0000:00:12.0 00:24:46.203 Initialization complete. 00:24:46.203 Time used:237884.359 (us). 
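Both aer runs above, nvme_single_aen and nvme_multi_aen, exercise the same mechanism: the test registers an AER callback, drops each controller's 343 Kelvin default temperature threshold below the 323 Kelvin current temperature so the drive raises an Asynchronous Event Notification, services it by reading log page 2 in aer_cb, and restores the threshold. The only difference is -m, which forks the child whose output is tagged [Child], so the primary- and secondary-process paths both get covered. The two invocations, verbatim from the harness (-T selects the temperature-threshold exercise shown above; -i 0 is the shared-memory ID from the common SPDK option set):

    # Single-process temperature-threshold AER test.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0

    # Multi-process variant: parent and forked child each attach to all four
    # controllers and handle the same events.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0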
00:24:46.203 ************************************ 00:24:46.203 END TEST nvme_startup 00:24:46.203 ************************************ 00:24:46.203 00:24:46.203 real 0m0.367s 00:24:46.203 user 0m0.140s 00:24:46.203 sys 0m0.180s 00:24:46.203 07:22:10 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.203 07:22:10 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:24:46.203 07:22:10 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:24:46.203 07:22:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:46.203 07:22:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.203 07:22:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:46.203 ************************************ 00:24:46.203 START TEST nvme_multi_secondary 00:24:46.203 ************************************ 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65845 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65846 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:24:46.203 07:22:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:24:50.397 Initializing NVMe Controllers 00:24:50.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:50.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:24:50.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:24:50.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:24:50.397 Initialization complete. Launching workers. 
00:24:50.397 ======================================================== 00:24:50.397 Latency(us) 00:24:50.397 Device Information : IOPS MiB/s Average min max 00:24:50.397 PCIE (0000:00:10.0) NSID 1 from core 1: 4092.34 15.99 3907.57 1445.80 7706.21 00:24:50.397 PCIE (0000:00:11.0) NSID 1 from core 1: 4092.34 15.99 3909.40 1469.51 7733.18 00:24:50.397 PCIE (0000:00:13.0) NSID 1 from core 1: 4092.34 15.99 3909.30 1523.24 7691.64 00:24:50.397 PCIE (0000:00:12.0) NSID 1 from core 1: 4092.34 15.99 3909.26 1566.08 8110.69 00:24:50.397 PCIE (0000:00:12.0) NSID 2 from core 1: 4092.34 15.99 3909.07 1579.55 8012.26 00:24:50.397 PCIE (0000:00:12.0) NSID 3 from core 1: 4097.67 16.01 3903.87 1509.82 7297.67 00:24:50.397 ======================================================== 00:24:50.397 Total : 24559.39 95.94 3908.08 1445.80 8110.69 00:24:50.397 00:24:50.397 Initializing NVMe Controllers 00:24:50.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:50.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:50.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:24:50.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:24:50.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:24:50.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:24:50.397 Initialization complete. Launching workers. 00:24:50.397 ======================================================== 00:24:50.397 Latency(us) 00:24:50.397 Device Information : IOPS MiB/s Average min max 00:24:50.397 PCIE (0000:00:10.0) NSID 1 from core 2: 1917.19 7.49 8342.87 1803.49 17545.70 00:24:50.397 PCIE (0000:00:11.0) NSID 1 from core 2: 1917.19 7.49 8338.19 1486.57 17178.75 00:24:50.397 PCIE (0000:00:13.0) NSID 1 from core 2: 1917.19 7.49 8333.83 1948.93 17508.39 00:24:50.397 PCIE (0000:00:12.0) NSID 1 from core 2: 1917.19 7.49 8333.72 1773.02 16549.37 00:24:50.397 PCIE (0000:00:12.0) NSID 2 from core 2: 1917.19 7.49 8334.60 1834.28 16893.78 00:24:50.397 PCIE (0000:00:12.0) NSID 3 from core 2: 1917.19 7.49 8334.50 1809.40 17244.28 00:24:50.397 ======================================================== 00:24:50.397 Total : 11503.12 44.93 8336.28 1486.57 17545.70 00:24:50.397 00:24:50.397 07:22:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65845 00:24:51.794 Initializing NVMe Controllers 00:24:51.794 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:51.794 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:51.794 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:51.794 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:51.794 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:24:51.794 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:24:51.794 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:24:51.794 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:24:51.794 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:24:51.794 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:24:51.794 Initialization complete. Launching workers. 
00:24:51.794 ======================================================== 00:24:51.794 Latency(us) 00:24:51.794 Device Information : IOPS MiB/s Average min max 00:24:51.794 PCIE (0000:00:10.0) NSID 1 from core 0: 6730.48 26.29 2375.64 976.13 8362.78 00:24:51.794 PCIE (0000:00:11.0) NSID 1 from core 0: 6727.48 26.28 2377.88 1019.39 9037.26 00:24:51.794 PCIE (0000:00:13.0) NSID 1 from core 0: 6727.48 26.28 2377.89 1008.00 9288.83 00:24:51.794 PCIE (0000:00:12.0) NSID 1 from core 0: 6727.48 26.28 2377.87 1008.04 9303.95 00:24:51.794 PCIE (0000:00:12.0) NSID 2 from core 0: 6730.68 26.29 2376.74 1005.18 7850.59 00:24:51.794 PCIE (0000:00:12.0) NSID 3 from core 0: 6730.68 26.29 2376.74 995.41 7595.70 00:24:51.794 ======================================================== 00:24:51.794 Total : 40374.27 157.71 2377.13 976.13 9303.95 00:24:51.794 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65846 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65921 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65922 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:24:51.794 07:22:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:24:55.086 Initializing NVMe Controllers 00:24:55.086 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:55.086 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:55.086 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:55.086 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:55.086 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:24:55.086 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:24:55.086 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:24:55.086 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:24:55.086 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:24:55.086 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:24:55.086 Initialization complete. Launching workers. 
00:24:55.086 ======================================================== 00:24:55.086 Latency(us) 00:24:55.086 Device Information : IOPS MiB/s Average min max 00:24:55.086 PCIE (0000:00:10.0) NSID 1 from core 1: 4845.87 18.93 3299.86 1033.94 7134.61 00:24:55.086 PCIE (0000:00:11.0) NSID 1 from core 1: 4845.87 18.93 3301.27 1073.82 8554.32 00:24:55.086 PCIE (0000:00:13.0) NSID 1 from core 1: 4845.87 18.93 3301.36 1074.80 8132.10 00:24:55.086 PCIE (0000:00:12.0) NSID 1 from core 1: 4845.87 18.93 3301.32 1057.83 7603.42 00:24:55.086 PCIE (0000:00:12.0) NSID 2 from core 1: 4845.87 18.93 3301.53 1056.39 6486.40 00:24:55.086 PCIE (0000:00:12.0) NSID 3 from core 1: 4845.87 18.93 3301.63 1076.26 6841.97 00:24:55.086 ======================================================== 00:24:55.086 Total : 29075.22 113.58 3301.16 1033.94 8554.32 00:24:55.086 00:24:55.345 Initializing NVMe Controllers 00:24:55.345 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:55.345 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:55.345 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:55.345 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:55.345 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:24:55.345 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:24:55.345 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:24:55.345 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:24:55.345 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:24:55.345 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:24:55.345 Initialization complete. Launching workers. 00:24:55.345 ======================================================== 00:24:55.345 Latency(us) 00:24:55.345 Device Information : IOPS MiB/s Average min max 00:24:55.345 PCIE (0000:00:10.0) NSID 1 from core 0: 5352.50 20.91 2987.41 1039.05 7552.24 00:24:55.345 PCIE (0000:00:11.0) NSID 1 from core 0: 5352.50 20.91 2988.57 1074.37 7499.99 00:24:55.345 PCIE (0000:00:13.0) NSID 1 from core 0: 5352.50 20.91 2988.42 964.83 7384.19 00:24:55.345 PCIE (0000:00:12.0) NSID 1 from core 0: 5352.50 20.91 2988.28 921.89 7276.18 00:24:55.345 PCIE (0000:00:12.0) NSID 2 from core 0: 5352.50 20.91 2988.12 893.16 7231.02 00:24:55.345 PCIE (0000:00:12.0) NSID 3 from core 0: 5352.50 20.91 2987.97 871.68 6987.38 00:24:55.345 ======================================================== 00:24:55.345 Total : 32114.98 125.45 2988.13 871.68 7552.24 00:24:55.345 00:24:57.282 Initializing NVMe Controllers 00:24:57.282 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:24:57.282 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:24:57.282 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:24:57.282 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:24:57.282 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:24:57.282 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:24:57.282 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:24:57.282 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:24:57.282 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:24:57.282 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:24:57.282 Initialization complete. Launching workers. 
00:24:57.282 ======================================================== 00:24:57.282 Latency(us) 00:24:57.282 Device Information : IOPS MiB/s Average min max 00:24:57.282 PCIE (0000:00:10.0) NSID 1 from core 2: 3113.12 12.16 5136.37 1088.98 14745.06 00:24:57.282 PCIE (0000:00:11.0) NSID 1 from core 2: 3113.12 12.16 5134.90 1112.05 14472.58 00:24:57.282 PCIE (0000:00:13.0) NSID 1 from core 2: 3113.12 12.16 5134.59 1118.22 19014.97 00:24:57.282 PCIE (0000:00:12.0) NSID 1 from core 2: 3113.12 12.16 5134.68 1002.11 14681.31 00:24:57.282 PCIE (0000:00:12.0) NSID 2 from core 2: 3113.12 12.16 5134.57 929.02 14351.16 00:24:57.282 PCIE (0000:00:12.0) NSID 3 from core 2: 3113.12 12.16 5134.47 846.91 16591.75 00:24:57.282 ======================================================== 00:24:57.282 Total : 18678.71 72.96 5134.93 846.91 19014.97 00:24:57.282 00:24:57.282 ************************************ 00:24:57.282 END TEST nvme_multi_secondary 00:24:57.282 ************************************ 00:24:57.282 07:22:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65921 00:24:57.282 07:22:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65922 00:24:57.282 00:24:57.282 real 0m10.942s 00:24:57.282 user 0m18.802s 00:24:57.282 sys 0m1.158s 00:24:57.282 07:22:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.282 07:22:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:24:57.282 07:22:21 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:24:57.282 07:22:21 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:24:57.282 07:22:21 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64841 ]] 00:24:57.282 07:22:21 nvme -- common/autotest_common.sh@1094 -- # kill 64841 00:24:57.282 07:22:21 nvme -- common/autotest_common.sh@1095 -- # wait 64841 00:24:57.282 [2024-11-20 07:22:21.391287] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.391400] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.391461] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.391499] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.395213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.395294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.395328] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.395359] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.397873] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 
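nvme_multi_secondary, whose second-round totals close out above, checks that primary and secondary SPDK processes can drive the same controllers: three spdk_nvme_perf instances join one shared-memory group (-i 0) on disjoint core masks. In round one the primary (mask 0x1) runs 5 seconds and outlives the 3-second secondaries; round two hands the 5-second run to a secondary, which must then survive the primary's exit. Stripped of harness plumbing, round one looks roughly like this (which background pid lands in pid1 is not recoverable from the jumbled trace):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Primary: first process in shm group 0, core 0, longest run.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    # Secondaries: same -i 0, cores 1 and 2, shorter runs.
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait "$pid0"
    wait "$pid1"
    # Round two swaps durations: the primary gets -t 3, one secondary -t 5.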
00:24:57.282 [2024-11-20 07:22:21.397934] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.397954] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.397973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.400558] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.400612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.400629] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.282 [2024-11-20 07:22:21.400648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65788) is not found. Dropping the request. 00:24:57.541 [2024-11-20 07:22:21.573003] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:24:57.541 07:22:21 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:24:57.541 07:22:21 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:24:57.541 07:22:21 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:24:57.541 07:22:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:57.541 07:22:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.541 07:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:24:57.541 ************************************ 00:24:57.541 START TEST bdev_nvme_reset_stuck_adm_cmd 00:24:57.541 ************************************ 00:24:57.541 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:24:57.541 * Looking for test storage... 
00:24:57.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:57.541 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.541 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.541 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.800 --rc genhtml_branch_coverage=1 00:24:57.800 --rc genhtml_function_coverage=1 00:24:57.800 --rc genhtml_legend=1 00:24:57.800 --rc geninfo_all_blocks=1 00:24:57.800 --rc geninfo_unexecuted_blocks=1 00:24:57.800 00:24:57.800 ' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.800 --rc genhtml_branch_coverage=1 00:24:57.800 --rc genhtml_function_coverage=1 00:24:57.800 --rc genhtml_legend=1 00:24:57.800 --rc geninfo_all_blocks=1 00:24:57.800 --rc geninfo_unexecuted_blocks=1 00:24:57.800 00:24:57.800 ' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.800 --rc genhtml_branch_coverage=1 00:24:57.800 --rc genhtml_function_coverage=1 00:24:57.800 --rc genhtml_legend=1 00:24:57.800 --rc geninfo_all_blocks=1 00:24:57.800 --rc geninfo_unexecuted_blocks=1 00:24:57.800 00:24:57.800 ' 00:24:57.800 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.801 --rc genhtml_branch_coverage=1 00:24:57.801 --rc genhtml_function_coverage=1 00:24:57.801 --rc genhtml_legend=1 00:24:57.801 --rc geninfo_all_blocks=1 00:24:57.801 --rc geninfo_unexecuted_blocks=1 00:24:57.801 00:24:57.801 ' 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:24:57.801 
07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66091 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66091 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 66091 ']' 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
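Before the reset test proper, the harness picks the first enumerated controller and stands up spdk_tgt on four cores; the waitforlisten above blocks until the target answers on its RPC socket. Condensed, with head -n1 standing in for the get_first_nvme_bdf helper, and killprocess/waitforlisten being the harness functions seen in the trace:

    bdf=$(get_nvme_bdfs | head -n1)    # 0000:00:10.0 in this run
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_target_pid"   # polls /var/tmp/spdk.sock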
00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.801 07:22:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 [2024-11-20 07:22:22.061504] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:24:58.059 [2024-11-20 07:22:22.062246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66091 ] 00:24:58.317 [2024-11-20 07:22:22.284629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.317 [2024-11-20 07:22:22.468209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.317 [2024-11-20 07:22:22.468309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.317 [2024-11-20 07:22:22.468381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.317 [2024-11-20 07:22:22.468388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 nvme0n1 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_5Xx04.txt 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:24:59.694 true 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732087343 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66115 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:59.694 07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:24:59.694 
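At this point the stuck-admin-command scenario is fully armed: the error injection holds the next admin opcode 10 (Get Features) for up to 15 seconds and, instead of submitting it, completes it with SCT=0/SC=1; the backgrounded bdev_nvme_send_cmd (PID 66115, its payload a Get Features with cdw10=7, Number of Queues) wedges on exactly that command; after a 2-second sleep the reset below has to abort it, and the harness then decodes the saved completion to confirm the injected status arrived and the whole exchange stayed under the 5-second test_timeout. As plain RPC calls, with the base64 payload and the mktemp file name elided (both appear verbatim in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Hold the next admin Get Features (opc 10) for up to 15 s; on abort,
    # complete it manually with SCT=0, SC=1 instead of sending it to the drive.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the admin command that will get stuck, capturing its completion.
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c '<base64 cmd>' > "$tmp_file" &
    sleep 2
    # The reset path must abort the pending command for this call to return.
    "$rpc" bdev_nvme_reset_controller nvme0

The status check that follows decodes the 16-byte completion; the @14/@15 arithmetic is not itself visible in the trace, so reading the two extra arguments as a shift and a mask is an assumption, chosen because it reproduces the traced results (0x1 and 0x0):

    base64_decode_bits() {   # usage: base64_decode_bits <b64 cpl> <shift> <mask>
        local bin_array status
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # NVMe completion status word: last two bytes, little-endian (0x0002 here).
        status=$(( bin_array[15] << 8 | bin_array[14] ))
        printf '0x%x' $(( (status >> $2) & $3 ))
    }
    cpl=$(jq -r .cpl "$tmp_file")           # AAAAAAAAAAAAAAAAAAACAA== above
    sc=$(base64_decode_bits "$cpl" 1 255)   # -> 0x1, the injected SC
    sct=$(base64_decode_bits "$cpl" 9 3)    # -> 0x0, the injected SCT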
07:22:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:25:01.600 [2024-11-20 07:22:25.654081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:25:01.600 [2024-11-20 07:22:25.654452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:01.600 [2024-11-20 07:22:25.654485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:25:01.600 [2024-11-20 07:22:25.654505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.600 [2024-11-20 07:22:25.656685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:25:01.600 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66115 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66115 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66115 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_5Xx04.txt 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd --
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_5Xx04.txt 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66091 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 66091 ']' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 66091 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66091 00:25:01.600 killing process with pid 66091 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66091' 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 66091 00:25:01.600 07:22:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 66091 00:25:04.885 07:22:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:25:04.885 07:22:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:25:04.885 00:25:04.885 real 0m7.111s 00:25:04.885 user 0m24.852s 00:25:04.885 sys 0m0.838s 00:25:04.885 ************************************ 00:25:04.885 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:25:04.885 ************************************ 00:25:04.885 07:22:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.885 07:22:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:25:04.885 07:22:28 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:25:04.885 07:22:28 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:25:04.885 07:22:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:04.885 07:22:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.885 07:22:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:04.885 ************************************ 00:25:04.885 START TEST nvme_fio 00:25:04.885 ************************************ 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:04.885 07:22:28 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:04.885 07:22:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:25:05.143 07:22:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:05.143 07:22:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:25:05.401 07:22:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:25:05.401 07:22:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:05.401 07:22:29 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:05.401 07:22:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:25:05.659 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:05.659 fio-3.35 00:25:05.659 Starting 1 thread 00:25:08.943 00:25:08.943 test: (groupid=0, jobs=1): err= 0: pid=66277: Wed Nov 20 07:22:32 2024 00:25:08.943 read: IOPS=16.0k, BW=62.4MiB/s (65.4MB/s)(125MiB/2001msec) 00:25:08.943 slat (nsec): min=4441, max=65439, avg=6385.04, stdev=1895.06 00:25:08.943 clat (usec): min=338, max=9659, avg=3985.38, stdev=733.78 00:25:08.943 lat (usec): min=345, max=9666, avg=3991.76, stdev=734.65 00:25:08.943 clat percentiles (usec): 00:25:08.943 | 1.00th=[ 2507], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3359], 00:25:08.943 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:25:08.943 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4948], 00:25:08.943 | 99.00th=[ 7177], 99.50th=[ 8291], 99.90th=[ 8848], 99.95th=[ 9241], 00:25:08.943 | 99.99th=[ 9634] 00:25:08.943 bw ( KiB/s): min=59552, max=63936, per=97.43%, avg=62250.67, stdev=2361.14, samples=3 00:25:08.943 iops : min=14888, max=15984, avg=15562.67, stdev=590.29, samples=3 00:25:08.943 write: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec); 0 zone resets 00:25:08.943 slat (usec): min=4, max=128, avg= 6.73, stdev= 1.95 00:25:08.943 clat (usec): min=293, max=9654, avg=3989.31, stdev=731.53 00:25:08.943 lat (usec): min=299, max=9661, avg=3996.04, stdev=732.38 00:25:08.943 clat percentiles (usec): 00:25:08.943 | 1.00th=[ 2474], 5.00th=[ 3097], 10.00th=[ 3195], 20.00th=[ 3359], 00:25:08.943 | 30.00th=[ 3818], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:25:08.943 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4948], 00:25:08.943 | 99.00th=[ 7111], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[ 8848], 00:25:08.943 | 99.99th=[ 9634] 00:25:08.943 bw ( KiB/s): min=59864, max=63424, per=96.50%, avg=61800.00, stdev=1800.39, samples=3 00:25:08.943 iops : min=14966, max=15856, avg=15450.00, stdev=450.10, samples=3 00:25:08.943 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:25:08.943 lat (msec) : 2=0.24%, 4=41.03%, 10=58.70% 00:25:08.943 cpu : usr=98.90%, sys=0.25%, ctx=2, majf=0, minf=607 00:25:08.943 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:08.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:08.943 issued rwts: total=31961,32036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:08.943 00:25:08.943 Run status group 0 (all jobs): 00:25:08.943 READ: bw=62.4MiB/s (65.4MB/s), 62.4MiB/s-62.4MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec 00:25:08.943 WRITE: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:25:08.943 ----------------------------------------------------- 00:25:08.943 Suppressions used: 00:25:08.943 count bytes template 00:25:08.943 1 32 /usr/src/fio/parse.c 00:25:08.943 1 8 libtcmalloc_minimal.so 00:25:08.943 ----------------------------------------------------- 00:25:08.943 00:25:09.201 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:25:09.201 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:25:09.201 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:25:09.201 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:25:09.460 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:25:09.460 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:25:09.719 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:25:09.719 07:22:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:09.719 07:22:33 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:25:09.978 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:09.978 fio-3.35 00:25:09.978 Starting 1 thread 00:25:14.210 00:25:14.210 test: (groupid=0, jobs=1): err= 0: pid=66350: Wed Nov 20 07:22:37 2024 00:25:14.210 read: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2001msec) 00:25:14.210 slat (usec): min=4, max=116, avg= 7.04, stdev= 2.40 00:25:14.210 clat (usec): min=345, max=8914, avg=4554.69, stdev=996.12 00:25:14.210 lat (usec): min=352, max=8926, avg=4561.74, stdev=997.19 00:25:14.210 clat percentiles (usec): 00:25:14.210 | 1.00th=[ 2802], 5.00th=[ 3195], 10.00th=[ 3490], 20.00th=[ 3884], 00:25:14.210 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4424], 00:25:14.210 | 70.00th=[ 4686], 80.00th=[ 5342], 90.00th=[ 6194], 95.00th=[ 6456], 00:25:14.210 | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[ 8717], 00:25:14.210 | 99.99th=[ 8848] 00:25:14.210 bw ( KiB/s): min=50416, max=60856, per=98.27%, avg=55088.00, stdev=5305.59, samples=3 00:25:14.210 iops : min=12606, max=15214, avg=13772.67, stdev=1325.52, samples=3 00:25:14.210 write: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(110MiB/2001msec); 0 zone resets 00:25:14.210 slat (nsec): min=4680, max=87461, avg=7227.13, stdev=2370.25 00:25:14.210 clat (usec): min=310, max=8949, avg=4547.93, stdev=999.49 00:25:14.210 lat (usec): min=317, max=8962, avg=4555.16, stdev=1000.54 00:25:14.210 clat percentiles (usec): 00:25:14.210 | 1.00th=[ 2835], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3884], 00:25:14.210 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4424], 00:25:14.210 | 70.00th=[ 4686], 80.00th=[ 5342], 90.00th=[ 6194], 95.00th=[ 6456], 00:25:14.210 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8717], 99.95th=[ 8848], 00:25:14.210 | 99.99th=[ 8979] 00:25:14.210 bw ( KiB/s): min=50896, max=61096, per=98.37%, avg=55168.00, stdev=5297.81, samples=3 00:25:14.210 iops : min=12724, max=15274, avg=13792.00, stdev=1324.45, samples=3 00:25:14.210 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:25:14.210 lat (msec) : 2=0.12%, 4=26.06%, 10=73.78% 00:25:14.210 cpu : usr=98.90%, sys=0.00%, ctx=3, majf=0, minf=607 00:25:14.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:14.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:14.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:14.210 issued rwts: total=28042,28055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:14.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:14.210 00:25:14.210 Run status group 0 (all jobs): 00:25:14.210 READ: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2001-2001msec 00:25:14.210 WRITE: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2001-2001msec 00:25:14.210 ----------------------------------------------------- 00:25:14.210 Suppressions used: 00:25:14.210 count bytes template 00:25:14.210 1 32 /usr/src/fio/parse.c 00:25:14.210 1 8 libtcmalloc_minimal.so 00:25:14.210 ----------------------------------------------------- 00:25:14.210 00:25:14.210 07:22:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:25:14.210 07:22:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:25:14.210 07:22:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:25:14.210 07:22:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:25:14.210 07:22:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:25:14.210 07:22:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:25:14.469 07:22:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:25:14.469 07:22:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:14.469 07:22:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:25:14.727 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:14.727 fio-3.35 00:25:14.727 Starting 1 thread 00:25:18.011 00:25:18.012 test: (groupid=0, jobs=1): err= 0: pid=66411: Wed Nov 20 07:22:42 2024 00:25:18.012 read: IOPS=16.2k, BW=63.2MiB/s (66.3MB/s)(126MiB/2001msec) 00:25:18.012 slat (nsec): min=4544, max=71161, avg=6103.28, stdev=1774.90 00:25:18.012 clat (usec): min=387, max=9254, avg=3935.62, stdev=772.10 00:25:18.012 lat (usec): min=393, max=9325, avg=3941.72, stdev=773.00 00:25:18.012 clat percentiles (usec): 00:25:18.012 | 1.00th=[ 2933], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3261], 00:25:18.012 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3884], 60.00th=[ 4228], 
00:25:18.012 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5080], 00:25:18.012 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 7701], 99.95th=[ 7832], 00:25:18.012 | 99.99th=[ 8979] 00:25:18.012 bw ( KiB/s): min=57440, max=74368, per=97.91%, avg=63370.67, stdev=9533.76, samples=3 00:25:18.012 iops : min=14360, max=18592, avg=15842.67, stdev=2383.44, samples=3 00:25:18.012 write: IOPS=16.2k, BW=63.3MiB/s (66.4MB/s)(127MiB/2001msec); 0 zone resets 00:25:18.012 slat (nsec): min=4654, max=54147, avg=6246.11, stdev=1779.11 00:25:18.012 clat (usec): min=290, max=9038, avg=3931.82, stdev=765.89 00:25:18.012 lat (usec): min=296, max=9064, avg=3938.07, stdev=766.78 00:25:18.012 clat percentiles (usec): 00:25:18.012 | 1.00th=[ 2966], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3261], 00:25:18.012 | 30.00th=[ 3326], 40.00th=[ 3458], 50.00th=[ 3851], 60.00th=[ 4228], 00:25:18.012 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5080], 00:25:18.012 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 7963], 00:25:18.012 | 99.99th=[ 8717] 00:25:18.012 bw ( KiB/s): min=56640, max=73936, per=97.31%, avg=63074.67, stdev=9459.62, samples=3 00:25:18.012 iops : min=14160, max=18484, avg=15768.67, stdev=2364.91, samples=3 00:25:18.012 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:25:18.012 lat (msec) : 2=0.08%, 4=51.26%, 10=48.62% 00:25:18.012 cpu : usr=99.10%, sys=0.10%, ctx=5, majf=0, minf=607 00:25:18.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:18.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:18.012 issued rwts: total=32379,32424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:18.012 00:25:18.012 Run status group 0 (all jobs): 00:25:18.012 READ: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=126MiB (133MB), run=2001-2001msec 00:25:18.012 WRITE: bw=63.3MiB/s (66.4MB/s), 63.3MiB/s-63.3MiB/s (66.4MB/s-66.4MB/s), io=127MiB (133MB), run=2001-2001msec 00:25:18.271 ----------------------------------------------------- 00:25:18.271 Suppressions used: 00:25:18.271 count bytes template 00:25:18.271 1 32 /usr/src/fio/parse.c 00:25:18.271 1 8 libtcmalloc_minimal.so 00:25:18.271 ----------------------------------------------------- 00:25:18.271 00:25:18.271 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:25:18.271 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:25:18.271 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:25:18.271 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:25:18.838 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:25:18.838 07:22:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:25:19.143 07:22:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:25:19.143 07:22:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:19.143 07:22:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:25:19.402 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:19.402 fio-3.35 00:25:19.402 Starting 1 thread 00:25:23.591 00:25:23.591 test: (groupid=0, jobs=1): err= 0: pid=66477: Wed Nov 20 07:22:47 2024 00:25:23.591 read: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(118MiB/2001msec) 00:25:23.591 slat (nsec): min=4576, max=73100, avg=6440.73, stdev=2051.62 00:25:23.591 clat (usec): min=216, max=8938, avg=4229.24, stdev=923.82 00:25:23.591 lat (usec): min=222, max=8983, avg=4235.68, stdev=924.81 00:25:23.592 clat percentiles (usec): 00:25:23.592 | 1.00th=[ 2704], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3359], 00:25:23.592 | 30.00th=[ 3523], 40.00th=[ 3982], 50.00th=[ 4293], 60.00th=[ 4424], 00:25:23.592 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 6194], 00:25:23.592 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8160], 99.95th=[ 8291], 00:25:23.592 | 99.99th=[ 8717] 00:25:23.592 bw ( KiB/s): min=54208, max=61840, per=98.08%, avg=59101.33, stdev=4247.80, samples=3 00:25:23.592 iops : min=13552, max=15460, avg=14775.33, stdev=1061.95, samples=3 00:25:23.592 write: IOPS=15.1k, BW=58.9MiB/s (61.7MB/s)(118MiB/2001msec); 0 zone resets 00:25:23.592 slat (nsec): min=4675, max=54637, avg=6618.96, stdev=1972.92 00:25:23.592 clat (usec): min=234, max=8805, avg=4236.52, stdev=936.17 00:25:23.592 lat (usec): min=239, max=8818, avg=4243.14, stdev=937.13 00:25:23.592 clat percentiles (usec): 00:25:23.592 | 1.00th=[ 2704], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3359], 00:25:23.592 | 30.00th=[ 3523], 40.00th=[ 3982], 50.00th=[ 4293], 60.00th=[ 4424], 00:25:23.592 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 6325], 00:25:23.592 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8160], 
99.95th=[ 8291], 00:25:23.592 | 99.99th=[ 8586] 00:25:23.592 bw ( KiB/s): min=54048, max=62216, per=97.77%, avg=58952.00, stdev=4323.92, samples=3 00:25:23.592 iops : min=13512, max=15554, avg=14738.00, stdev=1080.98, samples=3 00:25:23.592 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:25:23.592 lat (msec) : 2=0.08%, 4=40.06%, 10=59.81% 00:25:23.592 cpu : usr=99.00%, sys=0.10%, ctx=3, majf=0, minf=606 00:25:23.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:23.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:23.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:23.592 issued rwts: total=30145,30162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:23.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:23.592 00:25:23.592 Run status group 0 (all jobs): 00:25:23.592 READ: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=118MiB (123MB), run=2001-2001msec 00:25:23.592 WRITE: bw=58.9MiB/s (61.7MB/s), 58.9MiB/s-58.9MiB/s (61.7MB/s-61.7MB/s), io=118MiB (124MB), run=2001-2001msec 00:25:23.850 ----------------------------------------------------- 00:25:23.850 Suppressions used: 00:25:23.850 count bytes template 00:25:23.850 1 32 /usr/src/fio/parse.c 00:25:23.850 1 8 libtcmalloc_minimal.so 00:25:23.850 ----------------------------------------------------- 00:25:23.850 00:25:23.850 07:22:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:25:23.850 07:22:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:25:23.850 00:25:23.850 real 0m19.280s 00:25:23.850 user 0m14.955s 00:25:23.850 sys 0m3.730s 00:25:23.850 07:22:48 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.850 ************************************ 00:25:23.850 END TEST nvme_fio 00:25:23.850 07:22:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:25:24.109 ************************************ 00:25:24.109 00:25:24.109 real 1m36.187s 00:25:24.109 user 3m48.184s 00:25:24.109 sys 0m23.716s 00:25:24.109 07:22:48 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.109 07:22:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:24.109 ************************************ 00:25:24.109 END TEST nvme 00:25:24.109 ************************************ 00:25:24.109 07:22:48 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:25:24.109 07:22:48 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:25:24.109 07:22:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:24.109 07:22:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.109 07:22:48 -- common/autotest_common.sh@10 -- # set +x 00:25:24.109 ************************************ 00:25:24.109 START TEST nvme_scc 00:25:24.109 ************************************ 00:25:24.109 07:22:48 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:25:24.109 * Looking for test storage... 
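Each controller above runs through the same nvme/nvme.sh loop: identify the device, skip it if no namespace is active, check whether any namespace reports 'Extended Data LBA' (in which case fio would need a block size that also covers per-block metadata), otherwise settle on bs=4096, and finally run fio through the SPDK external ioengine with libasan preloaded ahead of the plugin. A condensed bash sketch of that flow follows; the paths and grep patterns are taken from the trace, while run_fio_with_asan is an illustrative name rather than an actual SPDK helper, and the 'continue' on extended-LBA drives stands in for the metadata-aware bs computation the real script performs.

# Condensed sketch of the per-controller loop traced above. Paths match
# the log; run_fio_with_asan is an illustrative name, not an SPDK helper.
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

run_fio_with_asan() {
    local filename=$1 bs=$2 asan_lib
    # An ASan-instrumented plugin must be loaded after libasan itself,
    # hence the explicit LD_PRELOAD pair (autotest_common.sh@1349-1356).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$config" "--filename=$filename" --bs="$bs"
}

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # Skip controllers with no active namespaces (nvme.sh@35).
    "$identify" -r "trtype:PCIe traddr:$bdf" |
        grep -qE '^Namespace ID:[0-9]+' || continue
    # Namespaces formatted with extended LBAs would need a bs that
    # includes metadata; the real nvme.sh computes that bs, this
    # sketch simply skips such drives (nvme.sh@38-41).
    "$identify" -r "trtype:PCIe traddr:$bdf" |
        grep -q 'Extended Data LBA' && continue
    # fio splits --filename on ':', so colons in the BDF become dots.
    run_fio_with_asan "trtype=PCIe traddr=${bdf//:/.}" 4096
done

Preloading libasan explicitly matters because fio itself is uninstrumented: loading an ASan-built plugin into it fails unless the sanitizer runtime is already mapped, which is why the trace resolves the library path with ldd before every run.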
00:25:24.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:24.109 07:22:48 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:24.109 07:22:48 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:24.109 07:22:48 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@345 -- # : 1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.409 07:22:48 nvme_scc -- scripts/common.sh@368 -- # return 0 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.409 --rc genhtml_branch_coverage=1 00:25:24.409 --rc genhtml_function_coverage=1 00:25:24.409 --rc genhtml_legend=1 00:25:24.409 --rc geninfo_all_blocks=1 00:25:24.409 --rc geninfo_unexecuted_blocks=1 00:25:24.409 00:25:24.409 ' 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.409 --rc genhtml_branch_coverage=1 00:25:24.409 --rc genhtml_function_coverage=1 00:25:24.409 --rc genhtml_legend=1 00:25:24.409 --rc geninfo_all_blocks=1 00:25:24.409 --rc geninfo_unexecuted_blocks=1 00:25:24.409 00:25:24.409 ' 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:25:24.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.409 --rc genhtml_branch_coverage=1 00:25:24.409 --rc genhtml_function_coverage=1 00:25:24.409 --rc genhtml_legend=1 00:25:24.409 --rc geninfo_all_blocks=1 00:25:24.409 --rc geninfo_unexecuted_blocks=1 00:25:24.409 00:25:24.409 ' 00:25:24.409 07:22:48 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:24.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.410 --rc genhtml_branch_coverage=1 00:25:24.410 --rc genhtml_function_coverage=1 00:25:24.410 --rc genhtml_legend=1 00:25:24.410 --rc geninfo_all_blocks=1 00:25:24.410 --rc geninfo_unexecuted_blocks=1 00:25:24.410 00:25:24.410 ' 00:25:24.410 07:22:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:24.410 07:22:48 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.410 07:22:48 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.410 07:22:48 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.410 07:22:48 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.410 07:22:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.410 07:22:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.410 07:22:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.410 07:22:48 nvme_scc -- paths/export.sh@5 -- # export PATH 00:25:24.410 07:22:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
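The lt 1.15 2 gate traced just above (scripts/common.sh@333-368) is a plain field-wise version comparison: both versions are split on '.', '-' and ':' and compared numerically, field by field, with missing fields treated as 0. A minimal re-creation, assuming purely numeric fields; the real cmp_versions additionally normalizes non-numeric components through its decimal() helper.

# Field-wise version compare, as traced for 'lt 1.15 2' above.
# Assumes numeric fields; scripts/common.sh additionally sanitizes
# non-numeric components before comparing.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 -> true (1 < 2)

Here lcov 1.15 sorts below 2, so the branch- and function-coverage flags seen above get exported into LCOV_OPTS.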
00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:25:24.410 07:22:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:25:24.410 07:22:48 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:24.410 07:22:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:25:24.410 07:22:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:25:24.410 07:22:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:25:24.410 07:22:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:24.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.928 Waiting for block devices as requested 00:25:24.928 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:24.928 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:25.186 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:25.186 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:30.505 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:30.505 07:22:54 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:25:30.505 07:22:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:25:30.505 07:22:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:30.505 07:22:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:30.505 07:22:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
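The long run of IFS=:, read -r reg val and eval records that begins here is functions.sh's nvme_get filling the global nvme0 associative array from nvme id-ctrl output, one register per record (functions.sh@17-23). Its core shape, reduced to a sketch: the nvme-cli path is the one shown in the trace, while the whitespace handling is simplified relative to the real helper, which also covers namespaces and shifts between controller and namespace maps.

# Core of the nvme_get loop whose xtrace follows (functions.sh@17-23),
# reduced for readability; error handling and namespace data omitted.
declare -gA nvme0=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # 'vid       ' -> 'vid'
    val=${val# }                    # drop the single space after ':'
    [[ -n $reg && -n $val ]] || continue   # skip headers/blank records
    eval "nvme0[$reg]=\"$val\""     # -> nvme0[vid]="0x1b36"
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

Because read assigns everything after the first ':' to val, multi-colon values such as the ps0 power-state line survive intact, which is why the trace can store strings like 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' in one key.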
00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.505 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.506 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:25:30.507 07:22:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.507 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:30.508 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:25:30.508 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
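[Editor's aside: the repeating IFS=: / read / eval triplets in this trace are nvme/functions.sh's nvme_get helper scraping `nvme id-ns /dev/nvme0n1` into the global associative array nvme0n1, one register per line, exactly as traced at functions.sh@17-23. A minimal sketch of that loop, assuming nvme-cli's usual `key : value` text output; the name nvme_get_sketch and the whitespace trimming are simplifications, not the real helper:

    # Sketch of the nvme_get pattern traced at functions.sh@17-23.
    # Assumes input lines like "nsze        : 0x140000".
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # e.g. declare a global nvme0n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # trim the padded register name
            val=${val# }                 # drop the space after the colon
            [[ -n $val ]] || continue    # functions.sh@22 skips empty values
            eval "${ref}[$reg]=\"$val\"" # functions.sh@23: nvme0n1[nsze]="0x140000"
        done < <("$@")                   # e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
    }

End of aside; the trace continues below.]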
00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.509 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
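[Editor's aside: the lbaf0..lbaf7 entries recorded just below describe the eight LBA formats this namespace supports; lbads is the base-2 log of the LBA data size and ms the per-LBA metadata bytes, so `ms:0 lbads:9` is a bare 512-byte format and `ms:0 lbads:12` a bare 4 KiB one. nvme0n1 reported flbas=0x4 earlier in the trace, selecting format 4, which matches the `(in use)` tag on lbaf4 below. A hypothetical one-liner (not part of functions.sh) to turn a captured lbaf string into a byte size:

    # Hypothetical decoder for the lbafN strings stored in these arrays.
    # lbaf_size 'ms:0 lbads:12 rp:0 (in use)' -> 4096
    lbaf_size() {
        local lbads=${1##*lbads:}      # cut everything up to the lbads value
        echo $(( 1 << ${lbads%% *} ))  # 2^lbads bytes per LBA
    }

End of aside; the trace continues below.]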
00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:30.510 07:22:54 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:25:30.510 07:22:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:25:30.510 07:22:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:25:30.510 07:22:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:30.511 07:22:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:30.511 07:22:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:25:30.511 
07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- 
# nvme1[mdts]=7 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:25:30.511 07:22:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:25:30.511 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.512 07:22:54 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.512 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.512 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:25:30.513 07:22:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.513 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:25:30.514 07:22:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
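[Editor's aside: once nvme1's remaining id-ctrl rows land below, the loop repeats the bookkeeping already traced for nvme0 at functions.sh@53-63: a nameref ties _ctrl_ns to nvme1_ns, each /sys/class/nvme/nvme1/nvme1n1 entry is scraped with id-ns, and the controller is filed into the global lookup tables. Roughly, assuming nvme1 follows the same shape with the values this run has traced (PCI 0000:00:10.0 from functions.sh@49):

    # Sketch of the registration seen at functions.sh@58-63 (shown for nvme0
    # earlier; nvme1 should presumably follow with these traced values).
    declare -A ctrls nvmes bdfs nvme1_ns
    declare -a ordered_ctrls
    nvme1_ns[1]=nvme1n1          # _ctrl_ns[${ns##*n}]=nvme1n1
    ctrls[nvme1]=nvme1           # ctrls["$ctrl_dev"]=nvme1
    nvmes[nvme1]=nvme1_ns        # nvmes["$ctrl_dev"] names the per-ctrl ns map
    bdfs[nvme1]=0000:00:10.0     # PCI address from functions.sh@49
    ordered_ctrls[1]=nvme1       # index from ${ctrl_dev/nvme/}

End of aside; the trace continues below.]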
00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.514 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:25:30.515 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:30.516 
07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:25:30.516 07:22:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:25:30.516 07:22:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:25:30.516 07:22:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:30.516 07:22:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:25:30.516 07:22:54 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.516 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:25:30.779 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:25:30.780 07:22:54 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:25:30.780 07:22:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.780 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:25:30.781 07:22:54 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.781 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
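The @16-@23 records above are bash xtrace from the nvme_get helper in nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against a device, splits each "reg : val" output line at the first colon with IFS=:, skips lines with an empty value, and evals the pair into a global associative array named after the device (nvme2, nvme2n1, ...). A minimal sketch of that pattern, reconstructed from the trace alone -- the function and variable names follow the trace, the body is simplified, and plain "nvme" stands in for the pinned /usr/local/src/nvme-cli/nvme binary:

  # nvme_get <array-name> <nvme-cli args...> -- sketch reconstructed from the
  # @16-@23 xtrace records, not the verbatim helper
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                     # @20: global assoc array, e.g. nvme2=()
      while IFS=: read -r reg val; do         # @21: split "reg : val" at the first ':'
          [[ -n $val ]] || continue           # @22: skip header/blank lines
          reg=${reg//[[:space:]]/}            # "ps    0 " -> "ps0"
          eval "${ref}[${reg}]=\"${val# }\""  # @23: nvme2[mdts]="7", nvme2[ps0]="mp:25.00W ..."
      done < <(nvme "$@")                     # @16: e.g. nvme id-ctrl /dev/nvme2
  }

After the call the identify fields are plain lookups, e.g. ${nvme2[sn]} or ${nvme2[subnqn]}; values such as sn keep the trailing padding nvme-cli prints, which is why the trace quotes them as "12342 ".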
00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:25:30.782 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:25:30.783 
07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
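The surrounding @47-@63 records are the enumeration pass that drives those nvme_get calls: walk /sys/class/nvme/nvme*, skip controllers that pci_can_use (scripts/common.sh) rejects, cache id-ctrl plus per-namespace id-ns data, then fill the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping maps. A condensed sketch under the same assumptions -- nvme_get as sketched above, a stand-in pci_can_use, and a PCI-address derivation the trace itself does not show:

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  # stand-in for the allow/block-list check traced at scripts/common.sh@18-27;
  # with PCI_ALLOWED/PCI_BLOCKED unset it accepts everything, as in this run
  pci_can_use() {
      [[ " ${PCI_BLOCKED-} " != *" $1 "* ]] || return 1
      [[ -z ${PCI_ALLOWED-} ]] || [[ " $PCI_ALLOWED " == *" $1 "* ]]
  }

  scan_nvmes() {
      local ctrl ctrl_dev ns ns_dev pci
      for ctrl in /sys/class/nvme/nvme*; do                # @47
          [[ -e $ctrl ]] || continue                       # @48
          pci=$(basename "$(readlink -f "$ctrl/device")")  # @49 (derivation assumed)
          pci_can_use "$pci" || continue                   # @50
          ctrl_dev=${ctrl##*/}                             # @51: nvme2
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
          declare -gA "${ctrl_dev}_ns=()"                  # assumed init of the per-ctrl map
          local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
          for ns in "$ctrl/${ctrl##*/}n"*; do              # @54
              [[ -e $ns ]] || continue                     # @55
              ns_dev=${ns##*/}                             # @56: nvme2n1
              nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
              _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: keyed by namespace index
          done
          ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
          bdfs["$ctrl_dev"]=$pci                           # @62: 0000:00:12.0
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
      done
  }

Once the scan finishes, later test code can resolve a controller's PCI address or namespace map without re-running nvme-cli, e.g. ${bdfs[nvme2]} -> 0000:00:12.0 and ${nvme2n1[nsze]} -> 0x100000.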
00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.783 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:30.784 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.784 07:22:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:25:30.785 07:22:54 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:25:30.785 07:22:54 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:25:30.785 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
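The assignments streaming past here are produced by a tight key/value loop: nvme_get runs nvme-cli, splits each "field : value" line on ':', skips lines that carry no value, and evals the pair into a global associative array named after the device. A minimal reconstruction from the xtrace (nvme/functions.sh@16-23 as shown in the trace); the key/value trimming is an assumption and the upstream source may differ:

NVME=${NVME:-/usr/local/src/nvme-cli/nvme}    # binary path visible at functions.sh@16

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # functions.sh@20: declare the global map
    while IFS=: read -r reg val; do           # functions.sh@21: split "reg : val" lines
        reg=${reg//[[:space:]]/}              # assumed: strip padding around the key
        val=${val# }                          # assumed: drop the space after ':'
        [[ -n $val ]] || continue             # functions.sh@22: header lines have no value
        eval "${ref}[\$reg]=\$val"            # functions.sh@23: e.g. nvme2n2[mssrl]=128
    done < <("$NVME" "$@")                    # e.g. "$NVME" id-ns /dev/nvme2n2
}

Called as nvme_get nvme2n2 id-ns /dev/nvme2n2, exactly as at functions.sh@57 in the trace above. Note that read assigns everything after the first ':' to val, which is why colon-laden values such as the lbafN strings survive intact.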
00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
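Each namespace also reports eight LBA formats; the lbafN strings captured above and below ("ms:0 lbads:12 rp:0 (in use)") encode the metadata size, log2 of the data block size, and relative performance, with flbas=0x4 pointing at the "(in use)" entry. A small helper, hypothetical rather than part of functions.sh, that recovers the logical block size in bytes from one of these strings:

lba_block_size() {
    local fmt=$1                              # e.g. "ms:0 lbads:12 rp:0 (in use)"
    local lbads=${fmt#*lbads:}                # -> "12 rp:0 (in use)"
    lbads=${lbads%% *}                        # -> "12"
    echo $((1 << lbads))                      # 2^12 = 4096-byte logical blocks
}
lba_block_size "${nvme2n2[lbaf4]}"            # prints 4096 for the in-use format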
00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 
07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.786 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 
07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:25:30.787 07:22:54 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:30.787 
07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:25:30.787 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:25:30.788 07:22:54 nvme_scc -- scripts/common.sh@18 -- # local i 00:25:30.788 07:22:54 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:25:30.788 07:22:54 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:30.788 07:22:54 nvme_scc -- scripts/common.sh@27 -- # return 0 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@18 -- # shift 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:25:30.788 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
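Before this id-ctrl dump started, the surrounding loop (functions.sh@47-63 in the trace above) found /sys/class/nvme/nvme3, resolved its PCI address 0000:00:13.0, passed it through the pci_can_use allow-list check in scripts/common.sh, and registered the controller in a set of global maps. A condensed reconstruction inferred from the xtrace; the PCI-address derivation is an assumption, since the log only shows the resulting pci=0000:00:13.0:

for ctrl in /sys/class/nvme/nvme*; do                 # functions.sh@47
    [[ -e $ctrl ]] || continue                        # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of 0000:00:13.0
    pci_can_use "$pci" || continue                    # scripts/common.sh@18-27 gate
    ctrl_dev=${ctrl##*/}                              # functions.sh@51: nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52: fills nvme3=()
    for ns in "$ctrl/${ctrl##*/}n"*; do               # functions.sh@54
        [[ -e $ns ]] || continue                      # functions.sh@55
        nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"   # functions.sh@56-57
        _ctrl_ns[${ns##*n}]=${ns##*/}                 # functions.sh@58: index by ns number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # functions.sh@61: e.g. nvme2_ns
    bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # functions.sh@63
done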
00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:25:30.788 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
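Among the fields just parsed, mdts=7 caps a single I/O: a command may transfer at most 2^MDTS minimum-size memory pages. The trace does not show CAP.MPSMIN, so a 4 KiB minimum page, typical for QEMU NVMe, is assumed in this arithmetic:

mdts=${nvme3[mdts]}                      # 7, parsed above
mpsmin_bytes=4096                        # assumption: CAP.MPSMIN = 4 KiB on QEMU NVMe
echo $(( (1 << mdts) * mpsmin_bytes ))   # 524288 bytes = 512 KiB per command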
00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.789 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 
07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:30.790 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.050 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
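Aside on the sqes/cqes words captured a few records back (0x66 and 0x44): in the Identify Controller layout these pack log2 entry sizes, low nibble minimum, high nibble maximum. A quick decode using the values from this trace:

    sqes=$((0x66)); cqes=$((0x44))
    echo "SQ entry: min $((1 << (sqes & 0xf))), max $((1 << (sqes >> 4))) bytes"   # 64, 64
    echo "CQ entry: min $((1 << (cqes & 0xf))), max $((1 << (cqes >> 4))) bytes"   # 16, 16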
00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:31.051 07:22:54 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:25:31.051 07:22:54 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:25:31.051 07:22:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:25:31.051 
07:22:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:25:31.051 07:22:55 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:25:31.052 07:22:55 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:25:31.052 07:22:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:25:31.052 07:22:55 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:25:31.052 07:22:55 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:31.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:32.184 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:32.184 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:32.184 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:32.184 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:25:32.441 07:22:56 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:25:32.441 07:22:56 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:32.441 07:22:56 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.441 07:22:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:25:32.441 ************************************ 00:25:32.442 START TEST nvme_simple_copy 00:25:32.442 ************************************ 00:25:32.442 07:22:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:25:32.699 Initializing NVMe Controllers 00:25:32.699 Attaching to 0000:00:10.0 00:25:32.699 Controller supports SCC. Attached to 0000:00:10.0 00:25:32.699 Namespace ID: 1 size: 6GB 00:25:32.699 Initialization complete. 00:25:32.699 00:25:32.699 Controller QEMU NVMe Ctrl (12340 ) 00:25:32.699 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:25:32.699 Namespace Block Size:4096 00:25:32.699 Writing LBAs 0 to 63 with Random Data 00:25:32.699 Copied LBAs from 0 - 63 to the Destination LBA 256 00:25:32.699 LBAs matching Written Data: 64 00:25:32.699 00:25:32.699 real 0m0.393s 00:25:32.699 user 0m0.163s 00:25:32.699 sys 0m0.127s 00:25:32.699 07:22:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.699 07:22:56 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:25:32.699 ************************************ 00:25:32.699 END TEST nvme_simple_copy 00:25:32.699 ************************************ 00:25:32.699 00:25:32.699 real 0m8.742s 00:25:32.699 user 0m1.717s 00:25:32.699 sys 0m2.054s 00:25:32.699 07:22:56 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.699 ************************************ 00:25:32.699 END TEST nvme_scc 00:25:32.699 07:22:56 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:25:32.699 ************************************ 00:25:32.958 07:22:56 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:25:32.958 07:22:56 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:25:32.958 07:22:56 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:25:32.958 07:22:56 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:25:32.958 07:22:56 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:25:32.958 07:22:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:32.958 07:22:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.958 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:25:32.958 ************************************ 00:25:32.958 START TEST nvme_fdp 00:25:32.958 ************************************ 00:25:32.958 07:22:56 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:25:32.958 * Looking for test storage... 
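Two things worth unpacking from the nvme_scc pass that just finished: ctrl_has_scc gated on ONCS bit 8 (the Copy command bit, hence oncs=0x15d passing for every controller), and simple_copy verified the operation by comparing the 64 copied LBAs at destination LBA 256 against the source. A hand-rolled equivalent of both checks, hedged: the block size (4096) and LBA layout come from the output above, but the device path is hypothetical, since the real test drives the controller through SPDK's userspace PCIe driver rather than a kernel block device:

    oncs=$((0x15d))
    (( oncs & 1 << 8 )) && echo "Simple Copy supported"    # the ctrl_has_scc test
    bs=4096    # LBAs 0-63 were written with random data, then copied to LBA 256
    cmp <(dd if=/dev/nvme1n1 bs=$bs skip=0   count=64 2>/dev/null) \
        <(dd if=/dev/nvme1n1 bs=$bs skip=256 count=64 2>/dev/null) \
        && echo "LBAs matching Written Data: 64"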
00:25:32.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.958 --rc genhtml_branch_coverage=1 00:25:32.958 --rc genhtml_function_coverage=1 00:25:32.958 --rc genhtml_legend=1 00:25:32.958 --rc geninfo_all_blocks=1 00:25:32.958 --rc geninfo_unexecuted_blocks=1 00:25:32.958 00:25:32.958 ' 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.958 --rc genhtml_branch_coverage=1 00:25:32.958 --rc genhtml_function_coverage=1 00:25:32.958 --rc genhtml_legend=1 00:25:32.958 --rc geninfo_all_blocks=1 00:25:32.958 --rc geninfo_unexecuted_blocks=1 00:25:32.958 00:25:32.958 ' 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:25:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.958 --rc genhtml_branch_coverage=1 00:25:32.958 --rc genhtml_function_coverage=1 00:25:32.958 --rc genhtml_legend=1 00:25:32.958 --rc geninfo_all_blocks=1 00:25:32.958 --rc geninfo_unexecuted_blocks=1 00:25:32.958 00:25:32.958 ' 00:25:32.958 07:22:57 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.958 --rc genhtml_branch_coverage=1 00:25:32.958 --rc genhtml_function_coverage=1 00:25:32.958 --rc genhtml_legend=1 00:25:32.958 --rc geninfo_all_blocks=1 00:25:32.958 --rc geninfo_unexecuted_blocks=1 00:25:32.958 00:25:32.958 ' 00:25:32.958 07:22:57 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:25:32.958 07:22:57 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:25:32.958 07:22:57 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:25:32.958 07:22:57 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:32.958 07:22:57 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.958 07:22:57 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.958 07:22:57 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.958 07:22:57 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.959 07:22:57 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.959 07:22:57 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:25:32.959 07:22:57 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
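The lcov gate traced above walks cmp_versions from scripts/common.sh: split both version strings on ".", "-" and ":", then compare component-wise from the left. A simplified sketch of that logic (not the exact implementation, which the trace shows handling equality bookkeeping and more operators):

    cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2; local i
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} == ${ver2[i]:-0} )) && continue
            case $2 in
                '<') (( ${ver1[i]:-0} < ${ver2[i]:-0} )); return ;;
                '>') (( ${ver1[i]:-0} > ${ver2[i]:-0} )); return ;;
            esac
        done
        [[ $2 == '==' ]]                    # every component matched
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"   # matches the trace: lt 1.15 2 holds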
00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:25:32.959 07:22:57 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:25:32.959 07:22:57 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.959 07:22:57 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:33.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:33.525 Waiting for block devices as requested 00:25:33.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:33.783 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:33.783 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:33.783 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:39.081 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:39.081 07:23:03 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:25:39.081 07:23:03 nvme_fdp -- scripts/common.sh@18 -- # local i 00:25:39.081 07:23:03 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:39.081 07:23:03 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:39.081 07:23:03 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
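scan_nvme_ctrls, entered just above, pairs every /sys/class/nvme entry with its PCI address (nvme0 resolved to 0000:00:11.0 here) before dumping id-ctrl into the per-controller array. A hedged sketch of the resolution step; readlink on the sysfs device symlink is one way to recover the bdf, though the traced script additionally filters each address through pci_can_use (the [[ =~ ]] tests against an empty allow list above):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:11.0
        echo "${ctrl##*/} -> $bdf"
    done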
00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.081 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:25:39.082 07:23:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:25:39.082 07:23:03 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
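The oacs word captured just above (0x12a, the same value nvme3 reported earlier) is the Optional Admin Command Support bitmask. Decoding it per the Identify Controller layout, with bit names taken from the NVMe spec rather than from this log:

    oacs=$((0x12a))
    (( oacs & 1 << 1 )) && echo "Format NVM"
    (( oacs & 1 << 3 )) && echo "Namespace Management"
    (( oacs & 1 << 5 )) && echo "Directives"
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config"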
00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.082 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:25:39.083 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:25:39.083 07:23:03 nvme_fdp -- 
00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl, continued: domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0
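A note on the queue-entry-size values just captured: per the NVMe base specification, SQES and CQES pack the maximum and minimum entry sizes into the high and low nibbles, each as a power of two. A minimal bash sketch (decode_qes is a hypothetical helper, not part of nvme/functions.sh):

    # Decode an NVMe SQES/CQES byte: low nibble = required (minimum) size,
    # high nibble = maximum size, each as log2 of the entry size in bytes.
    decode_qes() {
        local qes=$(($1))
        printf 'min=%dB max=%dB\n' $((1 << (qes & 0xf))) $((1 << (qes >> 4 & 0xf)))
    }
    decode_qes 0x66   # SQES -> min=64B max=64B (64-byte submission queue entries)
    decode_qes 0x44   # CQES -> min=16B max=16B (16-byte completion queue entries)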
00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0 id-ctrl, continued: maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme0_ns; /sys/class/nvme/nvme0/nvme0n1 exists; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:25:39.083 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000
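The xtrace pattern repeated throughout this section (functions.sh@21-23) is nvme_get's parsing loop: nvme-cli prints one "name : value" line per identify field, and the loop splits each line on ":" and stores non-empty values in a bash associative array named after the device. Roughly, under the simplifying assumption that register names and values only need their column padding trimmed:

    # Sketch of the nvme_get parsing loop seen in the trace: split each
    # "reg : val" line from nvme-cli on ':' and keep non-empty values.
    declare -A nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/} val=${val# }   # trim the column padding
        [[ -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[subnqn]}"   # nqn.2019-08.org.qemu:12341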
00:25:39.084 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns, continued: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0
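Of the namespace fields above, flbas=0x4 is the one that determines the block size: its low four bits select the active entry in the lbaf table that follows (here index 4), and bit 4 indicates whether metadata is transferred extended with the data. A quick decode:

    # FLBAS: bits 3:0 select the in-use LBA format index, bit 4 = extended
    # metadata. 0x4 -> format index 4, separate metadata buffer.
    flbas=0x4
    echo "in-use lbaf index: $((flbas & 0xf)), extended LBA: $((flbas >> 4 & 1))"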
00:25:39.085 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns, continued: npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
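In each lbaf entry, lbads is the log2 of the data block size and ms the metadata bytes per block, so the eight formats advertised by this QEMU namespace are the usual 512-byte and 4096-byte sizes with 0/8/16/64 bytes of metadata. For instance:

    # lbads is log2 of the LBA data size: 9 -> 512 B, 12 -> 4096 B.
    for lbads in 9 12; do
        echo "lbads:$lbads -> $((1 << lbads)) bytes"
    done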
00:25:39.086 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme0n1 id-ns, continued: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:25:39.086 07:23:03 nvme_fdp -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
00:25:39.086 07:23:03 nvme_fdp -- nvme/functions.sh@47-52 -- # next controller: /sys/class/nvme/nvme1 exists; pci=0000:00:10.0; pci_can_use 0000:00:10.0 (scripts/common.sh@18-27: no allow/block list set, return 0); ctrl_dev=nvme1; nvme_get nvme1 id-ctrl /dev/nvme1
00:25:39.086 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
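With the in-use format known (lbaf4, lbads:12, i.e. 4096-byte blocks), the namespace size captured earlier (nsze=0x140000 blocks) works out to 5 GiB, consistent with a QEMU-backed test disk:

    # nsze=0x140000 blocks * 4096 bytes/block (in-use lbaf4, lbads:12)
    echo $((0x140000 * 4096))              # 5368709120 bytes
    echo "$((0x140000 * 4096 >> 30)) GiB"  # 5 GiB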
00:25:39.086 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
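vid=0x1b36 and ssvid=0x1af4 are Red Hat/QEMU PCI vendor IDs, and ver packs the NVMe version as major/minor/tertiary bytes, so 0x10400 decodes to NVMe 1.4.0:

    # VER register layout: bits 31:16 major, 15:8 minor, 7:0 tertiary.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))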
00:25:39.087 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl, continued: crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
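wctemp and cctemp are reported in kelvins, so the warning and critical composite-temperature thresholds above are 70 °C and 100 °C respectively:

    # WCTEMP/CCTEMP are kelvins; subtract 273 for the approximate Celsius value.
    for k in 343 373; do echo "$k K = $((k - 273)) C"; done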
00:25:39.088 07:23:03 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl, continued: mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
nvme1[ofcs]=0 00:25:39.089 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.089 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.089 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:25:39.375 07:23:03 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:25:39.375 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:25:39.376 07:23:03 nvme_fdp -- scripts/common.sh@18 -- # local i 00:25:39.376 07:23:03 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:25:39.376 07:23:03 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:39.376 07:23:03 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:25:39.376 07:23:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:25:39.377 
07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:25:39.377 07:23:03 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.377 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
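The trace above is the nvme_get helper from nvme/functions.sh at work: it feeds "nvme id-ctrl" output through a "while IFS=: read -r reg val" loop and eval-assigns every "reg : val" pair into a named associative array (nvme2 here). A minimal standalone sketch of that pattern follows, assuming plain nvme id-ctrl text output; the function and array names below are illustrative, and unlike the real helper it trims whitespace instead of keeping raw values:

#!/usr/bin/env bash
declare -A ctrl=()

parse_id_ctrl() {
    local dev=$1 reg val
    # Each id-ctrl line looks like "oncs      : 0x15d"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # drop padding around the key
        val="${val#"${val%%[![:space:]]*}"}"      # drop leading spaces on the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme2
echo "oncs=${ctrl[oncs]} vwc=${ctrl[vwc]}"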
00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:25:39.378 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:39.379 07:23:03 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
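Fields such as oncs=0x15d, oacs=0x12a, and vwc=0x7 captured here are bitmasks. Per the NVMe base specification (stated from the spec, not from this log), ONCS bit 2 advertises Dataset Management support; a small sketch of testing such a bit, with an illustrative helper name:

oncs=0x15d

bit_set() {                  # bit_set <mask> <bit>
    (( ($1 >> $2) & 1 ))
}

bit_set "$oncs" 2 && echo "controller advertises Dataset Management (DSM)"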
00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:39.379 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:25:39.380 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
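The trace above shows the nvme_get helper (nvme/functions.sh@16-23) walking the "field : value" output of /usr/local/src/nvme-cli/nvme one line at a time and eval-ing each pair into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace; the exact whitespace cleanup done by functions.sh is an assumption here:

    # Sketch of the nvme_get pattern seen in the trace; assumes nvme-cli
    # prints one "field : value" pair per line.
    nvme_get() {
        local ref=$1 reg val
        shift                                       # functions.sh@18
        local -gA "$ref=()"                         # e.g. nvme2n1 (functions.sh@20)
        while IFS=: read -r reg val; do             # functions.sh@21
            reg=${reg//[[:space:]]/}                # assumed cleanup of the field name
            val=${val# }                            # assumed trim of one leading space
            [[ -n $val ]] || continue               # banner/blank lines carry no value
            eval "${ref}[${reg}]=\"${val}\""        # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@") # functions.sh@16
    }

    nvme_get nvme2n1 id-ns /dev/nvme2n1             # afterwards: ${nvme2n1[nsze]} == 0x100000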
00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
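For orientation while reading the dump: nsze is a count of logical blocks and flbas bits 0-3 select which lbaf entry is active, so the captured fields are enough to compute the namespace size. A hypothetical helper (ns_bytes is not part of functions.sh) under those assumptions:

    # Hypothetical helper: namespace capacity from the fields captured above.
    # nsze counts logical blocks; flbas bits 0-3 pick the active LBA format,
    # whose lbads is log2(block size) -- lbads:12 means 4096-byte blocks.
    ns_bytes() {
        local name=$1
        local flbas_ref="${name}[flbas]" nsze_ref="${name}[nsze]"
        local fmt=$(( ${!flbas_ref} & 0xf ))        # e.g. 0x4 -> lbaf4
        local lbaf_ref="${name}[lbaf${fmt}]" lbads
        lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${!lbaf_ref}")
        echo $(( ${!nsze_ref} * (1 << lbads) ))
    }

    ns_bytes nvme2n1    # 0x100000 blocks * 4096 B = 4294967296 (4 GiB)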
00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.380 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.381 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:25:39.382 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:39.382 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
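Each namespace pass ends by recording the array name in the controller's nvme2_ns table keyed by namespace index (functions.sh@53 takes a nameref to nvme2_ns, @58 does _ctrl_ns[${ns##*n}]=nvme2nX), so the dump can be walked programmatically afterwards. A hypothetical consumer of those tables, using bash indirect expansion:

    # Hypothetical usage: nvme2_ns maps namespace index -> name of that
    # namespace's associative array (built at functions.sh@53/@58).
    for idx in "${!nvme2_ns[@]}"; do
        ns=${nvme2_ns[$idx]}                        # e.g. nvme2n3
        nsze_ref="${ns}[nsze]" flbas_ref="${ns}[flbas]"
        printf 'ns %s -> %s: nsze=%s flbas=%s\n' \
            "$idx" "$ns" "${!nsze_ref}" "${!flbas_ref}"
    done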
00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.383 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:25:39.384 
07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:25:39.384 07:23:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:25:39.385 07:23:03 nvme_fdp -- scripts/common.sh@18 -- # local i 00:25:39.385 07:23:03 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:25:39.385 07:23:03 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:39.385 07:23:03 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:25:39.385 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 
07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.385 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 
07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:25:39.386 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:25:39.387 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.387 07:23:03 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:25:39.388 07:23:03 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:25:39.388 07:23:03 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:25:39.388 07:23:03 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:39.956 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:40.522 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.780 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.780 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.780 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:40.780 07:23:04 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:25:40.781 07:23:04 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:40.781 07:23:04 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:40.781 07:23:04 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:25:40.781 ************************************ 00:25:40.781 START TEST nvme_flexible_data_placement 00:25:40.781 ************************************ 00:25:40.781 07:23:04 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:25:41.348 Initializing NVMe Controllers 00:25:41.348 Attaching to 0000:00:13.0 00:25:41.348 Controller supports FDP Attached to 0000:00:13.0 00:25:41.348 Namespace ID: 1 Endurance Group ID: 1 00:25:41.348 Initialization complete. 00:25:41.348 00:25:41.348 ================================== 00:25:41.348 == FDP tests for Namespace: #01 == 00:25:41.348 ================================== 00:25:41.348 00:25:41.348 Get Feature: FDP: 00:25:41.348 ================= 00:25:41.348 Enabled: Yes 00:25:41.348 FDP configuration Index: 0 00:25:41.348 00:25:41.348 FDP configurations log page 00:25:41.348 =========================== 00:25:41.348 Number of FDP configurations: 1 00:25:41.348 Version: 0 00:25:41.348 Size: 112 00:25:41.348 FDP Configuration Descriptor: 0 00:25:41.348 Descriptor Size: 96 00:25:41.348 Reclaim Group Identifier format: 2 00:25:41.348 FDP Volatile Write Cache: Not Present 00:25:41.348 FDP Configuration: Valid 00:25:41.348 Vendor Specific Size: 0 00:25:41.348 Number of Reclaim Groups: 2 00:25:41.348 Number of Reclaim Unit Handles: 8 00:25:41.348 Max Placement Identifiers: 128 00:25:41.348 Number of Namespaces Supported: 256 00:25:41.348 Reclaim Unit Nominal Size: 6000000 bytes 00:25:41.348 Estimated Reclaim Unit Time Limit: Not Reported 00:25:41.348 RUH Desc #000: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #001: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #002: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #003: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #004: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #005: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #006: RUH Type: Initially Isolated 00:25:41.348 RUH Desc #007: RUH Type: Initially Isolated 00:25:41.348 00:25:41.348 FDP reclaim unit handle usage log page 00:25:41.348 ====================================== 00:25:41.348 Number of Reclaim Unit Handles: 8 00:25:41.348 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:25:41.348 RUH Usage Desc #001: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #002: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #003: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #004: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #005: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #006: RUH Attributes: Unused 00:25:41.348 RUH Usage Desc #007: RUH Attributes: Unused 00:25:41.348 00:25:41.348 FDP statistics log page 00:25:41.348 ======================= 00:25:41.348 Host bytes with metadata written: 733794304 00:25:41.348 Media bytes with metadata written: 733872128 00:25:41.348 Media bytes erased: 0 00:25:41.348 00:25:41.348 FDP Reclaim unit handle status 00:25:41.348 ============================== 00:25:41.348 Number of RUHS descriptors: 2 00:25:41.348 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004433 00:25:41.348 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:25:41.348 00:25:41.348 FDP write on placement id: 0 success 00:25:41.348 00:25:41.348 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:25:41.348 00:25:41.348 IO mgmt send: RUH update for Placement ID: #0 Success 00:25:41.348 00:25:41.348 Get Feature: FDP Events for Placement handle: #0 00:25:41.348 ======================== 00:25:41.348 Number of FDP Events: 6 00:25:41.348 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:25:41.348 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:25:41.348 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:25:41.348 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:25:41.348 FDP Event: #4 Type: Media Reallocated Enabled: No 00:25:41.348 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:25:41.348 00:25:41.348 FDP events log page 00:25:41.348 =================== 00:25:41.348 Number of FDP events: 1 00:25:41.348 FDP Event #0: 00:25:41.348 Event Type: RU Not Written to Capacity 00:25:41.348 Placement Identifier: Valid 00:25:41.348 NSID: Valid 00:25:41.348 Location: Valid 00:25:41.348 Placement Identifier: 0 00:25:41.348 Event Timestamp: 9 00:25:41.348 Namespace Identifier: 1 00:25:41.348 Reclaim Group Identifier: 0 00:25:41.348 Reclaim Unit Handle Identifier: 0 00:25:41.348 00:25:41.348 FDP test passed 00:25:41.348 00:25:41.348 real 0m0.375s 00:25:41.348 user 0m0.176s 00:25:41.348 sys 0m0.097s 00:25:41.348 07:23:05 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.348 07:23:05 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:25:41.348 ************************************ 00:25:41.348 END TEST nvme_flexible_data_placement 00:25:41.348 ************************************ 00:25:41.348 00:25:41.348 real 0m8.386s 00:25:41.348 user 0m1.478s 00:25:41.348 sys 0m1.907s 00:25:41.348 07:23:05 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.348 07:23:05 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:25:41.348 ************************************ 00:25:41.348 END TEST nvme_fdp 00:25:41.348 ************************************ 00:25:41.348 07:23:05 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:25:41.348 07:23:05 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:25:41.348 07:23:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:41.348 07:23:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.348 07:23:05 -- common/autotest_common.sh@10 -- # set +x 00:25:41.348 ************************************ 00:25:41.348 START TEST nvme_rpc 00:25:41.348 ************************************ 00:25:41.348 07:23:05 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:25:41.348 * Looking for test storage... 
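
The controller selection traced above reduces to a single check: read CTRATT from `nvme id-ctrl` and test bit 19, the Flexible Data Placement capability. Only nvme3 (ctratt 0x88010) has the bit set; the controllers reporting 0x8000 do not, which is why get_ctrl_with_feature echoes nvme3. A minimal sketch of that pattern, assuming nvme-cli is installed; the parsing below is a simplification of nvme_get/ctrl_has_fdp from nvme/functions.sh, not the verbatim functions:

    declare -A ctrl=()
    # nvme id-ctrl prints "reg : value" pairs; key them into an associative array.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme3)
    # CTRATT bit 19 advertises Flexible Data Placement support.
    (( ${ctrl[ctratt]} & 1 << 19 )) && echo "nvme3 supports FDP"

With the values from this run, 0x88010 & 0x80000 is nonzero, so the check passes for nvme3 and fails for the 0x8000 controllers.
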
00:25:41.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:41.348 07:23:05 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:41.348 07:23:05 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:41.348 07:23:05 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:41.348 07:23:05 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.348 07:23:05 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.349 07:23:05 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:41.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.349 --rc genhtml_branch_coverage=1 00:25:41.349 --rc genhtml_function_coverage=1 00:25:41.349 --rc genhtml_legend=1 00:25:41.349 --rc geninfo_all_blocks=1 00:25:41.349 --rc geninfo_unexecuted_blocks=1 00:25:41.349 00:25:41.349 ' 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:41.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.349 --rc genhtml_branch_coverage=1 00:25:41.349 --rc genhtml_function_coverage=1 00:25:41.349 --rc genhtml_legend=1 00:25:41.349 --rc geninfo_all_blocks=1 00:25:41.349 --rc geninfo_unexecuted_blocks=1 00:25:41.349 00:25:41.349 ' 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:25:41.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.349 --rc genhtml_branch_coverage=1 00:25:41.349 --rc genhtml_function_coverage=1 00:25:41.349 --rc genhtml_legend=1 00:25:41.349 --rc geninfo_all_blocks=1 00:25:41.349 --rc geninfo_unexecuted_blocks=1 00:25:41.349 00:25:41.349 ' 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:41.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.349 --rc genhtml_branch_coverage=1 00:25:41.349 --rc genhtml_function_coverage=1 00:25:41.349 --rc genhtml_legend=1 00:25:41.349 --rc geninfo_all_blocks=1 00:25:41.349 --rc geninfo_unexecuted_blocks=1 00:25:41.349 00:25:41.349 ' 00:25:41.349 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.349 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:41.349 07:23:05 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:25:41.607 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:25:41.607 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67849 00:25:41.607 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:25:41.607 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:25:41.607 07:23:05 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67849 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67849 ']' 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.607 07:23:05 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:41.607 [2024-11-20 07:23:05.732247] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
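
The get_first_nvme_bdf helper traced a few lines above picks the controller the RPC test will attach: gen_nvme.sh emits an SPDK bdev config as JSON, and jq pulls each controller's PCI address. A standalone sketch of the same discovery, assuming the repo path used in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a JSON config; .config[].params.traddr is one BDF per controller.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}  # first controller; 0000:00:10.0 in this run
    echo "$bdf"
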
00:25:41.607 [2024-11-20 07:23:05.732957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67849 ] 00:25:41.864 [2024-11-20 07:23:05.912597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:41.864 [2024-11-20 07:23:06.048414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.864 [2024-11-20 07:23:06.048447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.298 07:23:07 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.298 07:23:07 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:43.298 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:25:43.298 Nvme0n1 00:25:43.298 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:25:43.298 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:25:43.556 request: 00:25:43.556 { 00:25:43.556 "bdev_name": "Nvme0n1", 00:25:43.556 "filename": "non_existing_file", 00:25:43.556 "method": "bdev_nvme_apply_firmware", 00:25:43.556 "req_id": 1 00:25:43.556 } 00:25:43.556 Got JSON-RPC error response 00:25:43.556 response: 00:25:43.556 { 00:25:43.556 "code": -32603, 00:25:43.556 "message": "open file failed." 00:25:43.556 } 00:25:43.556 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:25:43.556 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:25:43.556 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:25:43.815 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:25:43.815 07:23:07 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67849 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67849 ']' 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67849 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67849 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.815 killing process with pid 67849 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67849' 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67849 00:25:43.815 07:23:07 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67849 00:25:47.114 ************************************ 00:25:47.114 END TEST nvme_rpc 00:25:47.114 ************************************ 00:25:47.114 00:25:47.114 real 0m5.403s 00:25:47.114 user 0m10.251s 00:25:47.114 sys 0m0.776s 00:25:47.114 07:23:10 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.114 07:23:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:47.114 07:23:10 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:25:47.114 07:23:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:25:47.114 07:23:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.114 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:25:47.114 ************************************ 00:25:47.114 START TEST nvme_rpc_timeouts 00:25:47.114 ************************************ 00:25:47.114 07:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:25:47.114 * Looking for test storage... 00:25:47.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:47.114 07:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.114 07:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.114 07:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.114 07:23:10 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.114 07:23:10 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.114 07:23:11 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:25:47.114 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.114 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.114 --rc genhtml_branch_coverage=1 00:25:47.114 --rc genhtml_function_coverage=1 00:25:47.114 --rc genhtml_legend=1 00:25:47.114 --rc geninfo_all_blocks=1 00:25:47.114 --rc geninfo_unexecuted_blocks=1 00:25:47.114 00:25:47.114 ' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.115 --rc genhtml_branch_coverage=1 00:25:47.115 --rc genhtml_function_coverage=1 00:25:47.115 --rc genhtml_legend=1 00:25:47.115 --rc geninfo_all_blocks=1 00:25:47.115 --rc geninfo_unexecuted_blocks=1 00:25:47.115 00:25:47.115 ' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.115 --rc genhtml_branch_coverage=1 00:25:47.115 --rc genhtml_function_coverage=1 00:25:47.115 --rc genhtml_legend=1 00:25:47.115 --rc geninfo_all_blocks=1 00:25:47.115 --rc geninfo_unexecuted_blocks=1 00:25:47.115 00:25:47.115 ' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.115 --rc genhtml_branch_coverage=1 00:25:47.115 --rc genhtml_function_coverage=1 00:25:47.115 --rc genhtml_legend=1 00:25:47.115 --rc geninfo_all_blocks=1 00:25:47.115 --rc geninfo_unexecuted_blocks=1 00:25:47.115 00:25:47.115 ' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67936 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67936 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67968 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:25:47.115 07:23:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67968 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67968 ']' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.115 07:23:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:25:47.115 [2024-11-20 07:23:11.150944] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:25:47.115 [2024-11-20 07:23:11.151097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67968 ] 00:25:47.373 [2024-11-20 07:23:11.331876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:47.373 [2024-11-20 07:23:11.487192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.373 [2024-11-20 07:23:11.487209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.766 07:23:12 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.766 07:23:12 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:25:48.766 Checking default timeout settings: 00:25:48.766 07:23:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:25:48.766 07:23:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:49.030 Making settings changes with rpc: 00:25:49.030 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:25:49.030 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:25:49.289 Check default vs. modified settings: 00:25:49.289 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:25:49.289 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67936 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67936 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:25:49.547 Setting action_on_timeout is changed as expected. 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67936 00:25:49.547 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67936 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:25:49.805 Setting timeout_us is changed as expected. 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67936 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67936 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:25:49.805 Setting timeout_admin_us is changed as expected. 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:25:49.805 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67936 /tmp/settings_modified_67936 00:25:49.806 07:23:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67968 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67968 ']' 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67968 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67968 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.806 killing process with pid 67968 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67968' 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67968 00:25:49.806 07:23:13 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67968 00:25:53.088 RPC TIMEOUT SETTING TEST PASSED. 00:25:53.088 07:23:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:25:53.088 00:25:53.088 real 0m5.847s 00:25:53.088 user 0m11.491s 00:25:53.088 sys 0m0.782s 00:25:53.088 07:23:16 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.088 07:23:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:25:53.088 ************************************ 00:25:53.088 END TEST nvme_rpc_timeouts 00:25:53.088 ************************************ 00:25:53.088 07:23:16 -- spdk/autotest.sh@239 -- # uname -s 00:25:53.088 07:23:16 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:25:53.088 07:23:16 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:25:53.088 07:23:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.088 07:23:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.088 07:23:16 -- common/autotest_common.sh@10 -- # set +x 00:25:53.088 ************************************ 00:25:53.088 START TEST sw_hotplug 00:25:53.088 ************************************ 00:25:53.088 07:23:16 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:25:53.088 * Looking for test storage... 00:25:53.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:53.088 07:23:16 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.088 07:23:16 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.088 07:23:16 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.088 07:23:16 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.088 07:23:16 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:25:53.089 07:23:16 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.089 07:23:16 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.089 07:23:16 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.089 07:23:16 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:25:53.089 07:23:16 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.089 07:23:16 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.089 --rc genhtml_branch_coverage=1 00:25:53.089 --rc genhtml_function_coverage=1 00:25:53.089 --rc genhtml_legend=1 00:25:53.089 --rc geninfo_all_blocks=1 00:25:53.089 --rc geninfo_unexecuted_blocks=1 00:25:53.089 00:25:53.089 ' 00:25:53.089 07:23:16 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.089 --rc genhtml_branch_coverage=1 00:25:53.089 --rc genhtml_function_coverage=1 00:25:53.089 --rc genhtml_legend=1 00:25:53.089 --rc geninfo_all_blocks=1 00:25:53.089 --rc geninfo_unexecuted_blocks=1 00:25:53.089 00:25:53.089 ' 00:25:53.089 07:23:16 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.089 --rc genhtml_branch_coverage=1 00:25:53.089 --rc genhtml_function_coverage=1 00:25:53.089 --rc genhtml_legend=1 00:25:53.089 --rc geninfo_all_blocks=1 00:25:53.089 --rc geninfo_unexecuted_blocks=1 00:25:53.089 00:25:53.089 ' 00:25:53.089 07:23:16 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.089 --rc genhtml_branch_coverage=1 00:25:53.089 --rc genhtml_function_coverage=1 00:25:53.089 --rc genhtml_legend=1 00:25:53.089 --rc geninfo_all_blocks=1 00:25:53.089 --rc geninfo_unexecuted_blocks=1 00:25:53.089 00:25:53.089 ' 00:25:53.089 07:23:16 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:53.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:53.349 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.349 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.349 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.349 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:53.349 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:25:53.349 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:25:53.349 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:25:53.349 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@233 -- # local class 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:25:53.349 07:23:17 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@18 -- # local i 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:53.349 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:25:53.633 07:23:17 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:53.633 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:25:53.633 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:25:53.634 07:23:17 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:53.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:53.892 Waiting for block devices as requested 00:25:54.150 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.150 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.150 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:54.408 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:59.753 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:59.753 07:23:23 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:25:59.753 07:23:23 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:00.012 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:26:00.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:00.012 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:26:00.270 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:26:00.529 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:00.529 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:00.529 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:26:00.529 07:23:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68857 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:26:00.787 07:23:24 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:26:00.787 07:23:24 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:26:00.787 07:23:24 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:26:00.787 07:23:24 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:26:00.787 07:23:24 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:26:00.787 07:23:24 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:26:01.045 Initializing NVMe Controllers 00:26:01.045 Attaching to 0000:00:10.0 00:26:01.045 Attaching to 0000:00:11.0 00:26:01.045 Attached to 0000:00:10.0 00:26:01.045 Attached to 0000:00:11.0 00:26:01.045 Initialization complete. Starting I/O... 
00:26:01.045 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:26:01.045 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:26:01.045 00:26:01.982 QEMU NVMe Ctrl (12340 ): 1188 I/Os completed (+1188) 00:26:01.982 QEMU NVMe Ctrl (12341 ): 1232 I/Os completed (+1232) 00:26:01.982 00:26:02.934 QEMU NVMe Ctrl (12340 ): 2521 I/Os completed (+1333) 00:26:02.934 QEMU NVMe Ctrl (12341 ): 2612 I/Os completed (+1380) 00:26:02.934 00:26:04.307 QEMU NVMe Ctrl (12340 ): 3952 I/Os completed (+1431) 00:26:04.307 QEMU NVMe Ctrl (12341 ): 4065 I/Os completed (+1453) 00:26:04.307 00:26:04.874 QEMU NVMe Ctrl (12340 ): 5394 I/Os completed (+1442) 00:26:04.874 QEMU NVMe Ctrl (12341 ): 5521 I/Os completed (+1456) 00:26:04.874 00:26:06.254 QEMU NVMe Ctrl (12340 ): 6664 I/Os completed (+1270) 00:26:06.254 QEMU NVMe Ctrl (12341 ): 6916 I/Os completed (+1395) 00:26:06.254 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:06.822 [2024-11-20 07:23:30.831433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:26:06.822 Controller removed: QEMU NVMe Ctrl (12340 ) 00:26:06.822 [2024-11-20 07:23:30.835806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.835955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.836008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.836061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:26:06.822 [2024-11-20 07:23:30.842066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.842157] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.842195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.842225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:06.822 [2024-11-20 07:23:30.870981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:26:06.822 Controller removed: QEMU NVMe Ctrl (12341 ) 00:26:06.822 [2024-11-20 07:23:30.873588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.873697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.873745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.873783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:26:06.822 [2024-11-20 07:23:30.877962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.878040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.878074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 [2024-11-20 07:23:30.878100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:26:06.822 07:23:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:06.822 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:26:06.822 EAL: Scan for (pci) bus failed. 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:07.081 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:26:07.081 Attaching to 0000:00:10.0 00:26:07.081 Attached to 0000:00:10.0 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:07.081 07:23:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:07.081 Attaching to 0000:00:11.0 00:26:07.081 Attached to 0000:00:11.0 00:26:08.017 QEMU NVMe Ctrl (12340 ): 1174 I/Os completed (+1174) 00:26:08.017 QEMU NVMe Ctrl (12341 ): 1108 I/Os completed (+1108) 00:26:08.017 00:26:08.953 QEMU NVMe Ctrl (12340 ): 2496 I/Os completed (+1322) 00:26:08.953 QEMU NVMe Ctrl (12341 ): 2417 I/Os completed (+1309) 00:26:08.953 00:26:09.903 QEMU NVMe Ctrl (12340 ): 3772 I/Os completed (+1276) 00:26:09.903 QEMU NVMe Ctrl (12341 ): 3704 I/Os completed (+1287) 00:26:09.903 00:26:11.278 QEMU NVMe Ctrl (12340 ): 4988 I/Os completed (+1216) 00:26:11.278 QEMU NVMe Ctrl (12341 ): 5058 I/Os completed (+1354) 00:26:11.278 00:26:12.214 QEMU NVMe Ctrl (12340 ): 6329 I/Os completed (+1341) 00:26:12.214 QEMU NVMe Ctrl (12341 ): 6468 I/Os completed (+1410) 00:26:12.214 00:26:13.151 QEMU NVMe Ctrl (12340 ): 7571 I/Os completed (+1242) 00:26:13.151 QEMU NVMe Ctrl (12341 ): 7887 I/Os completed (+1419) 00:26:13.151 00:26:14.087 QEMU NVMe Ctrl (12340 ): 8798 I/Os completed (+1227) 00:26:14.087 QEMU NVMe Ctrl (12341 ): 9129 I/Os completed (+1242) 00:26:14.087 
00:26:15.023 QEMU NVMe Ctrl (12340 ): 10015 I/Os completed (+1217) 00:26:15.023 QEMU NVMe Ctrl (12341 ): 10371 I/Os completed (+1242) 00:26:15.023 00:26:15.958 QEMU NVMe Ctrl (12340 ): 11171 I/Os completed (+1156) 00:26:15.958 QEMU NVMe Ctrl (12341 ): 11606 I/Os completed (+1235) 00:26:15.958 00:26:16.891 QEMU NVMe Ctrl (12340 ): 12444 I/Os completed (+1273) 00:26:16.891 QEMU NVMe Ctrl (12341 ): 12886 I/Os completed (+1280) 00:26:16.891 00:26:18.291 QEMU NVMe Ctrl (12340 ): 13671 I/Os completed (+1227) 00:26:18.291 QEMU NVMe Ctrl (12341 ): 14177 I/Os completed (+1291) 00:26:18.291 00:26:19.227 QEMU NVMe Ctrl (12340 ): 14884 I/Os completed (+1213) 00:26:19.227 QEMU NVMe Ctrl (12341 ): 15514 I/Os completed (+1337) 00:26:19.227 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:19.227 [2024-11-20 07:23:43.274262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:26:19.227 Controller removed: QEMU NVMe Ctrl (12340 ) 00:26:19.227 [2024-11-20 07:23:43.276948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.277030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.277064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.277099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:26:19.227 [2024-11-20 07:23:43.281662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.281745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.281773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.281804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:19.227 [2024-11-20 07:23:43.326845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:26:19.227 Controller removed: QEMU NVMe Ctrl (12341 ) 00:26:19.227 [2024-11-20 07:23:43.329376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.329455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.329499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.329529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:26:19.227 [2024-11-20 07:23:43.334319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.334406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.334445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 [2024-11-20 07:23:43.334489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:19.227 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:26:19.227 EAL: Scan for (pci) bus failed. 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:26:19.227 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:26:19.486 Attaching to 0000:00:10.0 00:26:19.486 Attached to 0000:00:10.0 00:26:19.486 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:19.744 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:19.744 07:23:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:19.744 Attaching to 0000:00:11.0 00:26:19.744 Attached to 0000:00:11.0 00:26:20.002 QEMU NVMe Ctrl (12340 ): 572 I/Os completed (+572) 00:26:20.002 QEMU NVMe Ctrl (12341 ): 432 I/Os completed (+432) 00:26:20.002 00:26:20.936 QEMU NVMe Ctrl (12340 ): 1856 I/Os completed (+1284) 00:26:20.936 QEMU NVMe Ctrl (12341 ): 1714 I/Os completed (+1282) 00:26:20.936 00:26:21.881 QEMU NVMe Ctrl (12340 ): 3028 I/Os completed (+1172) 00:26:21.881 QEMU NVMe Ctrl (12341 ): 2947 I/Os completed (+1233) 00:26:21.881 00:26:23.257 QEMU NVMe Ctrl (12340 ): 4167 I/Os completed (+1139) 00:26:23.257 QEMU NVMe Ctrl (12341 ): 4274 I/Os completed (+1327) 00:26:23.257 00:26:24.284 QEMU NVMe Ctrl (12340 ): 5344 I/Os completed (+1177) 00:26:24.284 QEMU NVMe Ctrl (12341 ): 5486 I/Os completed (+1212) 00:26:24.284 00:26:25.229 QEMU NVMe Ctrl (12340 ): 6890 I/Os completed (+1546) 00:26:25.229 QEMU NVMe Ctrl (12341 ): 7047 I/Os completed (+1561) 00:26:25.229 00:26:26.165 QEMU NVMe Ctrl (12340 ): 8474 I/Os completed (+1584) 00:26:26.165 QEMU NVMe Ctrl (12341 ): 8667 I/Os completed (+1620) 00:26:26.165 00:26:27.104 QEMU 
NVMe Ctrl (12340 ): 10150 I/Os completed (+1676) 00:26:27.104 QEMU NVMe Ctrl (12341 ): 10347 I/Os completed (+1680) 00:26:27.104 00:26:28.041 QEMU NVMe Ctrl (12340 ): 11994 I/Os completed (+1844) 00:26:28.041 QEMU NVMe Ctrl (12341 ): 12191 I/Os completed (+1844) 00:26:28.041 00:26:28.979 QEMU NVMe Ctrl (12340 ): 13670 I/Os completed (+1676) 00:26:28.979 QEMU NVMe Ctrl (12341 ): 13872 I/Os completed (+1681) 00:26:28.979 00:26:29.917 QEMU NVMe Ctrl (12340 ): 15094 I/Os completed (+1424) 00:26:29.917 QEMU NVMe Ctrl (12341 ): 15302 I/Os completed (+1430) 00:26:29.917 00:26:31.296 QEMU NVMe Ctrl (12340 ): 16502 I/Os completed (+1408) 00:26:31.296 QEMU NVMe Ctrl (12341 ): 16718 I/Os completed (+1416) 00:26:31.296 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:31.556 [2024-11-20 07:23:55.719331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:26:31.556 Controller removed: QEMU NVMe Ctrl (12340 ) 00:26:31.556 [2024-11-20 07:23:55.722222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.722302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.722333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.722367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:26:31.556 [2024-11-20 07:23:55.726085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.726171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.726210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 [2024-11-20 07:23:55.726250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:31.556 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:31.815 [2024-11-20 07:23:55.763457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:26:31.815 Controller removed: QEMU NVMe Ctrl (12341 ) 00:26:31.815 [2024-11-20 07:23:55.767929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.768000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.768031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.768052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:26:31.815 [2024-11-20 07:23:55.771186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.771241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.771289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 [2024-11-20 07:23:55.771311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:31.815 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:26:31.815 EAL: Scan for (pci) bus failed. 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:31.815 07:23:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:26:32.074 Attaching to 0000:00:10.0 00:26:32.074 Attached to 0000:00:10.0 00:26:32.074 QEMU NVMe Ctrl (12340 ): 88 I/Os completed (+88) 00:26:32.074 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:32.074 07:23:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:32.074 Attaching to 0000:00:11.0 00:26:32.074 Attached to 0000:00:11.0 00:26:32.074 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:26:32.074 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:26:32.074 [2024-11-20 07:23:56.137936] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:26:44.281 07:24:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:26:44.281 07:24:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:44.281 07:24:08 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.30 00:26:44.281 07:24:08 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.30 00:26:44.281 07:24:08 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:26:44.281 07:24:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.30 00:26:44.281 07:24:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.30 2 00:26:44.281 remove_attach_helper took 43.30s to 
complete (handling 2 nvme drive(s)) 07:24:08 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68857 00:26:50.844 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68857) - No such process 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68857 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69401 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69401 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69401 ']' 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:50.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:50.844 07:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:50.844 07:24:14 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:50.844 [2024-11-20 07:24:14.284633] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:26:50.844 [2024-11-20 07:24:14.284867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69401 ] 00:26:50.844 [2024-11-20 07:24:14.492336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.844 [2024-11-20 07:24:14.667114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:26:51.779 07:24:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:26:51.779 07:24:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:58.336 07:24:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.336 07:24:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:58.336 [2024-11-20 07:24:21.739943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:26:58.336 [2024-11-20 07:24:21.742465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:21.742510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:21.742532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:21.742558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:21.742571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:21.742586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:21.742600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:21.742613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:21.742625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:21.742645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:21.742658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:21.742672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 07:24:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:26:58.336 07:24:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:58.336 [2024-11-20 07:24:22.139979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:26:58.336 [2024-11-20 07:24:22.142931] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:22.142993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:22.143015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:22.143041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:22.143056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:22.143070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:22.143088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:22.143100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:22.143116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 [2024-11-20 07:24:22.143130] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:58.336 [2024-11-20 07:24:22.143145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:58.336 [2024-11-20 07:24:22.143158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:58.336 07:24:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.336 07:24:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:58.336 07:24:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:58.336 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:58.595 07:24:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:10.793 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:10.794 [2024-11-20 07:24:34.740252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:27:10.794 [2024-11-20 07:24:34.743376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:10.794 [2024-11-20 07:24:34.743428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.794 [2024-11-20 07:24:34.743446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.794 [2024-11-20 07:24:34.743474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:10.794 [2024-11-20 07:24:34.743488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.794 [2024-11-20 07:24:34.743506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.794 [2024-11-20 07:24:34.743521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:10.794 [2024-11-20 07:24:34.743536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.794 [2024-11-20 07:24:34.743549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.794 [2024-11-20 07:24:34.743565] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:10.794 [2024-11-20 07:24:34.743577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:10.794 [2024-11-20 07:24:34.743593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:10.794 07:24:34 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:10.794 07:24:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:27:10.794 07:24:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:11.052 [2024-11-20 07:24:35.140215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:27:11.052 [2024-11-20 07:24:35.142954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:11.052 [2024-11-20 07:24:35.142998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.052 [2024-11-20 07:24:35.143024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.052 [2024-11-20 07:24:35.143049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:11.052 [2024-11-20 07:24:35.143065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.052 [2024-11-20 07:24:35.143079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.052 [2024-11-20 07:24:35.143097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:11.052 [2024-11-20 07:24:35.143110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.052 [2024-11-20 07:24:35.143126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.052 [2024-11-20 07:24:35.143141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:11.052 [2024-11-20 07:24:35.143156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.052 [2024-11-20 07:24:35.143169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:11.310 07:24:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.310 07:24:35 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:11.310 07:24:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:11.310 07:24:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:11.310 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:11.567 07:24:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:23.775 07:24:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.775 07:24:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:23.775 07:24:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:23.775 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:23.775 [2024-11-20 07:24:47.840482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
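The bdev_bdfs helper traced repeatedly above reduces to one pipeline; a minimal sketch reconstructed from the xtrace (rpc_cmd is SPDK's wrapper around scripts/rpc.py, and the /dev/fd/63 in the trace is the process substitution):

    bdev_bdfs() {
        # List the PCI BDF of every NVMe-backed bdev the target currently sees.
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }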
00:27:23.775 [2024-11-20 07:24:47.844033] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:23.775 [2024-11-20 07:24:47.844092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.775 [2024-11-20 07:24:47.844113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.775 [2024-11-20 07:24:47.844147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:23.775 [2024-11-20 07:24:47.844162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.775 [2024-11-20 07:24:47.844188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.776 [2024-11-20 07:24:47.844204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:23.776 [2024-11-20 07:24:47.844223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.776 [2024-11-20 07:24:47.844237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.776 [2024-11-20 07:24:47.844258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:23.776 [2024-11-20 07:24:47.844271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:23.776 [2024-11-20 07:24:47.844290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:23.776 07:24:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.776 07:24:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:23.776 07:24:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:27:23.776 07:24:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:24.342 [2024-11-20 07:24:48.240525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
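The sleep 0.5 / "Still waiting for %s to be gone" pattern at sw_hotplug.sh@50-51 is a detach poll. A hypothetical reconstruction of that loop, assuming bdev_bdfs as sketched above:

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done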
00:27:24.342 [2024-11-20 07:24:48.244218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:24.342 [2024-11-20 07:24:48.244275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.342 [2024-11-20 07:24:48.244302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.342 [2024-11-20 07:24:48.244330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:24.342 [2024-11-20 07:24:48.244350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.342 [2024-11-20 07:24:48.244365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.342 [2024-11-20 07:24:48.244387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:24.342 [2024-11-20 07:24:48.244401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.342 [2024-11-20 07:24:48.244426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.342 [2024-11-20 07:24:48.244442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:24.342 [2024-11-20 07:24:48.244461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.342 [2024-11-20 07:24:48.244475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:24.342 07:24:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.342 07:24:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:24.342 07:24:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:27:24.342 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
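The echo sequence at sw_hotplug.sh@59-62 (driver name, the BDF twice, then an empty string) lines up with the stock sysfs rebind dance. A generic sketch using standard Linux PCI attributes; the exact destination paths are an assumption here, not lifted from sw_hotplug.sh:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"              # detach (the @40 'echo 1', presumably)
    echo 1 > /sys/bus/pci/rescan                             # rediscover it (@56, presumably)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                 # bind per the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"    # clear the override again (@62)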
00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:24.600 07:24:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.19 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.19 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:27:36.797 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:27:36.797 07:25:00 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:27:36.797 07:25:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:27:36.797 07:25:00 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:43.390 07:25:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.390 07:25:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:43.390 07:25:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:43.390 [2024-11-20 07:25:06.968738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:27:43.390 [2024-11-20 07:25:06.971616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.390 [2024-11-20 07:25:06.971686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.390 [2024-11-20 07:25:06.971713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.390 [2024-11-20 07:25:06.971745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.390 [2024-11-20 07:25:06.971759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.390 [2024-11-20 07:25:06.971779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.390 [2024-11-20 07:25:06.971795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.390 [2024-11-20 07:25:06.971824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.390 [2024-11-20 07:25:06.971839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.390 [2024-11-20 07:25:06.971864] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.390 [2024-11-20 07:25:06.971877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.390 [2024-11-20 07:25:06.971897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.390 07:25:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:27:43.390 07:25:07 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:43.390 07:25:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.390 07:25:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:43.390 07:25:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:27:43.390 07:25:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:43.649 [2024-11-20 07:25:07.668776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:27:43.649 [2024-11-20 07:25:07.670856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.649 [2024-11-20 07:25:07.670903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.649 [2024-11-20 07:25:07.670928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.649 [2024-11-20 07:25:07.670954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.649 [2024-11-20 07:25:07.670973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.649 [2024-11-20 07:25:07.670986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.649 [2024-11-20 07:25:07.671006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.649 [2024-11-20 07:25:07.671019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.649 [2024-11-20 07:25:07.671037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.649 [2024-11-20 07:25:07.671052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:43.649 [2024-11-20 07:25:07.671073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.649 [2024-11-20 07:25:07.671088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:43.909 07:25:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:43.909 07:25:08 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:27:43.909 07:25:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:27:43.909 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:44.167 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:27:44.426 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:44.426 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:44.426 07:25:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:56.632 [2024-11-20 07:25:20.569104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
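Stepping back, every iteration traced above follows the same shape: remove_attach_helper 3 6 true runs three hotplug events with a 6-second wait, verifying through SPDK bdevs rather than the kernel. A condensed sketch of that loop, with detach/attach as hypothetical stand-ins for the sysfs writes shown in the trace and nvmes as the script's device array:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 bdfs dev
        while ((hotplug_events--)); do
            for dev in "${nvmes[@]}"; do detach "$dev"; done      # @39-40: echo 1 per device
            while bdfs=($(bdev_bdfs)); ((${#bdfs[@]} > 0)); do    # @50-51: wait for bdevs to vanish
                sleep 0.5
            done
            for dev in "${nvmes[@]}"; do attach "$dev"; done      # @58-62: rebind sequence
            sleep $((hotplug_wait * 2))                           # @66: the 'sleep 12' in this run
            bdfs=($(bdev_bdfs))
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]                     # @70-71: everything came back
        done
    }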
00:27:56.632 [2024-11-20 07:25:20.571517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.632 [2024-11-20 07:25:20.571577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.632 [2024-11-20 07:25:20.571599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.632 [2024-11-20 07:25:20.571635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.632 [2024-11-20 07:25:20.571650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.632 [2024-11-20 07:25:20.571668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.632 [2024-11-20 07:25:20.571685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.632 [2024-11-20 07:25:20.571702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.632 [2024-11-20 07:25:20.571716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.632 [2024-11-20 07:25:20.571735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.632 [2024-11-20 07:25:20.571749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.632 [2024-11-20 07:25:20.571765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:56.632 07:25:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:27:56.632 07:25:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:57.202 07:25:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.202 07:25:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort 
-u 00:27:57.202 07:25:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:27:57.202 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:27:57.202 [2024-11-20 07:25:21.269134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:27:57.202 [2024-11-20 07:25:21.271641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:57.202 [2024-11-20 07:25:21.271695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.202 [2024-11-20 07:25:21.271720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.202 [2024-11-20 07:25:21.271754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:57.202 [2024-11-20 07:25:21.271776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.202 [2024-11-20 07:25:21.271791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.202 [2024-11-20 07:25:21.271824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:57.202 [2024-11-20 07:25:21.271838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.202 [2024-11-20 07:25:21.271855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.202 [2024-11-20 07:25:21.271872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:57.202 [2024-11-20 07:25:21.271887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.202 [2024-11-20 07:25:21.271901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:57.771 07:25:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.771 07:25:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:57.771 07:25:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:57.771 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:57.771 07:25:21 sw_hotplug -- 
nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:58.030 07:25:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:58.030 07:25:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:10.236 [2024-11-20 07:25:34.169452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
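A side note on reading the @71 line: the backslashes are xtrace artifacts, not script text. The right-hand side of an unquoted == inside [[ ]] is a glob pattern, so set -x escapes every character when echoing it. The underlying test is just (variable names assumed):

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # printed by xtrace as ... == \0\0\0\0\:\0\0\:\1\0\.\0 ...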
00:28:10.236 [2024-11-20 07:25:34.172087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.236 [2024-11-20 07:25:34.172156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.236 [2024-11-20 07:25:34.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.236 [2024-11-20 07:25:34.172212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.236 [2024-11-20 07:25:34.172228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.236 [2024-11-20 07:25:34.172252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.236 [2024-11-20 07:25:34.172278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.236 [2024-11-20 07:25:34.172303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.236 [2024-11-20 07:25:34.172317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.236 [2024-11-20 07:25:34.172346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.236 [2024-11-20 07:25:34.172364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.236 [2024-11-20 07:25:34.172381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:10.236 07:25:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:28:10.236 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:28:10.529 [2024-11-20 07:25:34.569461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:28:10.529 [2024-11-20 07:25:34.571675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.529 [2024-11-20 07:25:34.571728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.529 [2024-11-20 07:25:34.571751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.529 [2024-11-20 07:25:34.571779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.529 [2024-11-20 07:25:34.571796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.529 [2024-11-20 07:25:34.571810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.529 [2024-11-20 07:25:34.571842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.529 [2024-11-20 07:25:34.571856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.529 [2024-11-20 07:25:34.571873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.529 [2024-11-20 07:25:34.571889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:10.529 [2024-11-20 07:25:34.571910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:10.529 [2024-11-20 07:25:34.571924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:10.803 07:25:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.803 07:25:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:10.803 07:25:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:10.803 07:25:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:11.063 07:25:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.30 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.30 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.30 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.30 2 00:28:23.269 remove_attach_helper took 46.30s to complete (handling 2 nvme drive(s)) 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:28:23.269 07:25:47 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69401 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69401 ']' 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69401 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69401 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.269 killing process with pid 69401 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69401' 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69401 00:28:23.269 07:25:47 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69401 00:28:25.817 07:25:49 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:26.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:26.646 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:26.646 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:26.907 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:28:26.907 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:28:26.907 00:28:26.907 real 2m34.289s 00:28:26.907 user 1m52.653s 00:28:26.907 sys 0m22.201s 00:28:26.907 07:25:51 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.907 07:25:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:26.907 ************************************ 00:28:26.907 END TEST sw_hotplug 00:28:26.907 ************************************ 00:28:26.907 07:25:51 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:28:26.907 07:25:51 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:28:26.907 07:25:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:26.907 07:25:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.907 07:25:51 -- common/autotest_common.sh@10 -- # set +x 00:28:26.907 ************************************ 00:28:26.907 START TEST nvme_xnvme 00:28:26.907 ************************************ 00:28:26.907 07:25:51 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:28:27.169 * Looking for test storage... 00:28:27.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.169 07:25:51 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.169 07:25:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.169 --rc genhtml_branch_coverage=1 00:28:27.169 --rc genhtml_function_coverage=1 00:28:27.169 --rc genhtml_legend=1 00:28:27.169 --rc geninfo_all_blocks=1 00:28:27.169 --rc geninfo_unexecuted_blocks=1 00:28:27.169 00:28:27.169 ' 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.170 --rc genhtml_branch_coverage=1 00:28:27.170 --rc genhtml_function_coverage=1 00:28:27.170 --rc genhtml_legend=1 00:28:27.170 --rc geninfo_all_blocks=1 00:28:27.170 --rc geninfo_unexecuted_blocks=1 00:28:27.170 00:28:27.170 ' 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.170 --rc genhtml_branch_coverage=1 00:28:27.170 --rc genhtml_function_coverage=1 00:28:27.170 --rc genhtml_legend=1 00:28:27.170 --rc geninfo_all_blocks=1 00:28:27.170 --rc geninfo_unexecuted_blocks=1 00:28:27.170 00:28:27.170 ' 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.170 --rc genhtml_branch_coverage=1 00:28:27.170 --rc genhtml_function_coverage=1 00:28:27.170 --rc genhtml_legend=1 00:28:27.170 --rc geninfo_all_blocks=1 00:28:27.170 --rc geninfo_unexecuted_blocks=1 00:28:27.170 00:28:27.170 ' 00:28:27.170 07:25:51 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:27.170 07:25:51 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.170 07:25:51 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.170 07:25:51 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.170 07:25:51 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.170 07:25:51 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.170 07:25:51 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.170 07:25:51 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.170 07:25:51 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:28:27.170 07:25:51 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.170 07:25:51 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.170 07:25:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:27.170 ************************************ 00:28:27.170 START TEST xnvme_to_malloc_dd_copy 00:28:27.170 ************************************ 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:28:27.170 07:25:51 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:28:27.170 07:25:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:28:27.430 { 00:28:27.430 "subsystems": [ 00:28:27.430 { 00:28:27.430 "subsystem": "bdev", 00:28:27.430 "config": [ 00:28:27.430 { 00:28:27.430 "params": { 00:28:27.430 "block_size": 512, 00:28:27.430 "num_blocks": 2097152, 00:28:27.430 "name": "malloc0" 00:28:27.430 }, 00:28:27.430 "method": "bdev_malloc_create" 00:28:27.430 }, 00:28:27.430 { 00:28:27.430 "params": { 00:28:27.430 "io_mechanism": "libaio", 00:28:27.430 "filename": "/dev/nullb0", 00:28:27.430 "name": "null0" 00:28:27.430 }, 00:28:27.430 "method": "bdev_xnvme_create" 00:28:27.430 }, 00:28:27.430 { 00:28:27.430 "method": "bdev_wait_for_examine" 00:28:27.430 } 00:28:27.430 ] 00:28:27.430 } 00:28:27.430 ] 00:28:27.430 } 00:28:27.430 [2024-11-20 07:25:51.446721] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
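The JSON above is everything spdk_dd needs: a 1 GiB source bdev (malloc0, 2097152 blocks of 512 B) and an xnvme bdev (null0) over /dev/nullb0, the device created by the 'modprobe null_blk gb=1' step earlier. A sketch of reproducing the traced copy by hand; config.json is a hypothetical file holding the JSON printed above (the test pipes it through /dev/fd/62 instead):

    modprobe null_blk gb=1                                    # provides /dev/nullb0
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json config.json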
00:28:27.430 [2024-11-20 07:25:51.446947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70787 ] 00:28:27.689 [2024-11-20 07:25:51.651667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.689 [2024-11-20 07:25:51.827379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.979  [2024-11-20T07:25:55.751Z] Copying: 207/1024 [MB] (207 MBps) [2024-11-20T07:25:56.687Z] Copying: 416/1024 [MB] (208 MBps) [2024-11-20T07:25:57.624Z] Copying: 626/1024 [MB] (210 MBps) [2024-11-20T07:25:58.561Z] Copying: 838/1024 [MB] (211 MBps) [2024-11-20T07:26:03.850Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:28:39.647 00:28:39.647 07:26:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:28:39.647 07:26:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:28:39.647 07:26:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:28:39.647 07:26:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:28:39.647 { 00:28:39.647 "subsystems": [ 00:28:39.647 { 00:28:39.647 "subsystem": "bdev", 00:28:39.647 "config": [ 00:28:39.647 { 00:28:39.647 "params": { 00:28:39.647 "block_size": 512, 00:28:39.647 "num_blocks": 2097152, 00:28:39.647 "name": "malloc0" 00:28:39.647 }, 00:28:39.647 "method": "bdev_malloc_create" 00:28:39.647 }, 00:28:39.647 { 00:28:39.647 "params": { 00:28:39.647 "io_mechanism": "libaio", 00:28:39.647 "filename": "/dev/nullb0", 00:28:39.647 "name": "null0" 00:28:39.647 }, 00:28:39.647 "method": "bdev_xnvme_create" 00:28:39.647 }, 00:28:39.647 { 00:28:39.647 "method": "bdev_wait_for_examine" 00:28:39.647 } 00:28:39.647 ] 00:28:39.647 } 00:28:39.647 ] 00:28:39.647 } 00:28:39.647 [2024-11-20 07:26:03.136294] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
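The second spdk_dd pass above reuses the identical two-bdev config and only swaps source and sink, reading the null bdev back into the malloc bdev; in terms of the illustrative sketch:

build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json   # same config, direction reversed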
00:28:39.647 [2024-11-20 07:26:03.136538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:28:39.647 [2024-11-20 07:26:03.327980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.647 [2024-11-20 07:26:03.476527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.181  [2024-11-20T07:26:07.322Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-20T07:26:08.271Z] Copying: 481/1024 [MB] (236 MBps) [2024-11-20T07:26:09.207Z] Copying: 723/1024 [MB] (241 MBps) [2024-11-20T07:26:09.465Z] Copying: 961/1024 [MB] (238 MBps) [2024-11-20T07:26:14.745Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:28:50.542 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:28:50.542 07:26:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:28:50.542 { 00:28:50.542 "subsystems": [ 00:28:50.542 { 00:28:50.542 "subsystem": "bdev", 00:28:50.542 "config": [ 00:28:50.542 { 00:28:50.542 "params": { 00:28:50.542 "block_size": 512, 00:28:50.542 "num_blocks": 2097152, 00:28:50.542 "name": "malloc0" 00:28:50.542 }, 00:28:50.542 "method": "bdev_malloc_create" 00:28:50.542 }, 00:28:50.542 { 00:28:50.542 "params": { 00:28:50.542 "io_mechanism": "io_uring", 00:28:50.542 "filename": "/dev/nullb0", 00:28:50.542 "name": "null0" 00:28:50.542 }, 00:28:50.542 "method": "bdev_xnvme_create" 00:28:50.542 }, 00:28:50.542 { 00:28:50.542 "method": "bdev_wait_for_examine" 00:28:50.542 } 00:28:50.542 ] 00:28:50.542 } 00:28:50.542 ] 00:28:50.542 } 00:28:50.542 [2024-11-20 07:26:13.870224] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:28:50.542 [2024-11-20 07:26:13.870412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71036 ] 00:28:50.542 [2024-11-20 07:26:14.064281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.542 [2024-11-20 07:26:14.189625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.075  [2024-11-20T07:26:18.213Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-20T07:26:19.149Z] Copying: 452/1024 [MB] (234 MBps) [2024-11-20T07:26:20.085Z] Copying: 678/1024 [MB] (226 MBps) [2024-11-20T07:26:20.653Z] Copying: 906/1024 [MB] (228 MBps) [2024-11-20T07:26:24.896Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:29:00.693 00:29:00.693 07:26:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:29:00.693 07:26:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:29:00.693 07:26:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:29:00.693 07:26:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:29:00.693 { 00:29:00.693 "subsystems": [ 00:29:00.693 { 00:29:00.693 "subsystem": "bdev", 00:29:00.693 "config": [ 00:29:00.693 { 00:29:00.693 "params": { 00:29:00.693 "block_size": 512, 00:29:00.693 "num_blocks": 2097152, 00:29:00.693 "name": "malloc0" 00:29:00.693 }, 00:29:00.693 "method": "bdev_malloc_create" 00:29:00.693 }, 00:29:00.693 { 00:29:00.693 "params": { 00:29:00.693 "io_mechanism": "io_uring", 00:29:00.693 "filename": "/dev/nullb0", 00:29:00.693 "name": "null0" 00:29:00.693 }, 00:29:00.693 "method": "bdev_xnvme_create" 00:29:00.693 }, 00:29:00.693 { 00:29:00.693 "method": "bdev_wait_for_examine" 00:29:00.693 } 00:29:00.693 ] 00:29:00.693 } 00:29:00.693 ] 00:29:00.693 } 00:29:00.693 [2024-11-20 07:26:24.762290] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
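The two io_uring passes are byte-for-byte the same configs with only "io_mechanism" switched from "libaio" to "io_uring". The same bdev can also be created against a running SPDK target over RPC; the positional argument order (filename, bdev name, io mechanism) matches the bdev_xnvme_create commands that appear later in this log:

scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring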
00:29:00.693 [2024-11-20 07:26:24.762493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:29:00.951 [2024-11-20 07:26:24.957733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.951 [2024-11-20 07:26:25.084572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.484  [2024-11-20T07:26:28.624Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-20T07:26:29.999Z] Copying: 490/1024 [MB] (242 MBps) [2024-11-20T07:26:30.933Z] Copying: 713/1024 [MB] (223 MBps) [2024-11-20T07:26:30.933Z] Copying: 963/1024 [MB] (250 MBps) [2024-11-20T07:26:35.121Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:29:10.918 00:29:11.180 07:26:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:29:11.180 07:26:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:29:11.180 00:29:11.180 real 0m43.889s 00:29:11.180 user 0m38.355s 00:29:11.180 sys 0m4.947s 00:29:11.180 07:26:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.180 07:26:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:29:11.180 ************************************ 00:29:11.180 END TEST xnvme_to_malloc_dd_copy 00:29:11.180 ************************************ 00:29:11.180 07:26:35 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:29:11.180 07:26:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:11.180 07:26:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.180 07:26:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:11.180 ************************************ 00:29:11.180 START TEST xnvme_bdevperf 00:29:11.180 ************************************ 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:11.180 07:26:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:11.180 { 00:29:11.180 "subsystems": [ 00:29:11.180 { 00:29:11.180 "subsystem": "bdev", 00:29:11.180 "config": [ 00:29:11.180 { 00:29:11.180 "params": { 00:29:11.180 "io_mechanism": "libaio", 00:29:11.180 "filename": "/dev/nullb0", 00:29:11.180 "name": "null0" 00:29:11.180 }, 00:29:11.180 "method": "bdev_xnvme_create" 00:29:11.180 }, 00:29:11.180 { 00:29:11.180 "method": "bdev_wait_for_examine" 00:29:11.180 } 00:29:11.180 ] 00:29:11.180 } 00:29:11.180 ] 00:29:11.180 } 00:29:11.180 [2024-11-20 07:26:35.354594] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:11.180 [2024-11-20 07:26:35.354761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71302 ] 00:29:11.439 [2024-11-20 07:26:35.549377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.697 [2024-11-20 07:26:35.709757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.040 Running I/O for 5 seconds... 
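Decoding the bdevperf flags above: queue depth 64 (-q), 4096-byte I/Os (-o), a random-read workload (-w randread) against the null0 bdev (-T) for five seconds (-t). A standalone sketch, where /tmp/xnvme_perf.json stands in for the gen_conf output and would hold just the bdev_xnvme_create and bdev_wait_for_examine entries shown above:

build/examples/bdevperf --json /tmp/xnvme_perf.json -q 64 -w randread -t 5 -T null0 -o 4096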
00:29:13.912 147008.00 IOPS, 574.25 MiB/s [2024-11-20T07:26:39.490Z] 148608.00 IOPS, 580.50 MiB/s [2024-11-20T07:26:40.424Z] 149248.00 IOPS, 583.00 MiB/s [2024-11-20T07:26:41.361Z] 149232.00 IOPS, 582.94 MiB/s 00:29:17.158 Latency(us) 00:29:17.158 [2024-11-20T07:26:41.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.158 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:29:17.158 null0 : 5.00 148531.16 580.20 0.00 0.00 428.36 134.58 1966.08 00:29:17.158 [2024-11-20T07:26:41.361Z] =================================================================================================================== 00:29:17.158 [2024-11-20T07:26:41.361Z] Total : 148531.16 580.20 0.00 0.00 428.36 134.58 1966.08 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:29:18.098 07:26:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.356 { 00:29:18.356 "subsystems": [ 00:29:18.356 { 00:29:18.356 "subsystem": "bdev", 00:29:18.356 "config": [ 00:29:18.356 { 00:29:18.356 "params": { 00:29:18.356 "io_mechanism": "io_uring", 00:29:18.356 "filename": "/dev/nullb0", 00:29:18.356 "name": "null0" 00:29:18.356 }, 00:29:18.356 "method": "bdev_xnvme_create" 00:29:18.356 }, 00:29:18.356 { 00:29:18.356 "method": "bdev_wait_for_examine" 00:29:18.356 } 00:29:18.356 ] 00:29:18.356 } 00:29:18.356 ] 00:29:18.356 } 00:29:18.356 [2024-11-20 07:26:42.406943] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:18.356 [2024-11-20 07:26:42.407115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:29:18.614 [2024-11-20 07:26:42.593898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.614 [2024-11-20 07:26:42.709132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.872 Running I/O for 5 seconds... 
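Sanity-checking the libaio latency table above: at 4096 bytes per I/O, bandwidth and IOPS are tied together by MiB/s = IOPS x 4096 / 2^20, and the totals line agrees:

awk 'BEGIN { printf "%.2f MiB/s\n", 148531.16 * 4096 / 1048576 }'   # prints 580.20 MiB/s, matching the table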
00:29:21.188 187968.00 IOPS, 734.25 MiB/s [2024-11-20T07:26:46.329Z] 181760.00 IOPS, 710.00 MiB/s [2024-11-20T07:26:47.268Z] 180224.00 IOPS, 704.00 MiB/s [2024-11-20T07:26:48.206Z] 180352.00 IOPS, 704.50 MiB/s [2024-11-20T07:26:48.206Z] 180812.80 IOPS, 706.30 MiB/s 00:29:24.003 Latency(us) 00:29:24.003 [2024-11-20T07:26:48.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.004 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:29:24.004 null0 : 5.00 180753.55 706.07 0.00 0.00 351.47 308.18 2028.50 00:29:24.004 [2024-11-20T07:26:48.207Z] =================================================================================================================== 00:29:24.004 [2024-11-20T07:26:48.207Z] Total : 180753.55 706.07 0.00 0.00 351.47 308.18 2028.50 00:29:25.415 07:26:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:29:25.415 07:26:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:29:25.415 00:29:25.415 real 0m14.099s 00:29:25.415 user 0m10.575s 00:29:25.415 sys 0m3.299s 00:29:25.415 07:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.415 07:26:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.415 ************************************ 00:29:25.415 END TEST xnvme_bdevperf 00:29:25.415 ************************************ 00:29:25.415 00:29:25.415 real 0m58.306s 00:29:25.415 user 0m49.084s 00:29:25.415 sys 0m8.418s 00:29:25.415 07:26:49 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.415 ************************************ 00:29:25.415 END TEST nvme_xnvme 00:29:25.415 ************************************ 00:29:25.415 07:26:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:25.415 07:26:49 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:29:25.415 07:26:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:25.415 07:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.415 07:26:49 -- common/autotest_common.sh@10 -- # set +x 00:29:25.415 ************************************ 00:29:25.415 START TEST blockdev_xnvme 00:29:25.415 ************************************ 00:29:25.415 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:29:25.415 * Looking for test storage... 
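The nvme_xnvme totals just above are internally consistent as well: the two sub-tests account for essentially the whole wall time, with the small remainder presumably harness overhead:

awk 'BEGIN { printf "%.3f s\n", 43.889 + 14.099 }'   # 57.988 s of the 58.306 s total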
00:29:25.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:25.415 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.415 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.415 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.674 07:26:49 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.674 --rc genhtml_branch_coverage=1 00:29:25.674 --rc genhtml_function_coverage=1 00:29:25.674 --rc genhtml_legend=1 00:29:25.674 --rc geninfo_all_blocks=1 00:29:25.674 --rc geninfo_unexecuted_blocks=1 00:29:25.674 00:29:25.674 ' 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.674 --rc genhtml_branch_coverage=1 00:29:25.674 --rc genhtml_function_coverage=1 00:29:25.674 --rc genhtml_legend=1 
00:29:25.674 --rc geninfo_all_blocks=1 00:29:25.674 --rc geninfo_unexecuted_blocks=1 00:29:25.674 00:29:25.674 ' 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.674 --rc genhtml_branch_coverage=1 00:29:25.674 --rc genhtml_function_coverage=1 00:29:25.674 --rc genhtml_legend=1 00:29:25.674 --rc geninfo_all_blocks=1 00:29:25.674 --rc geninfo_unexecuted_blocks=1 00:29:25.674 00:29:25.674 ' 00:29:25.674 07:26:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.674 --rc genhtml_branch_coverage=1 00:29:25.674 --rc genhtml_function_coverage=1 00:29:25.674 --rc genhtml_legend=1 00:29:25.674 --rc geninfo_all_blocks=1 00:29:25.674 --rc geninfo_unexecuted_blocks=1 00:29:25.674 00:29:25.674 ' 00:29:25.674 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:25.674 07:26:49 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:29:25.674 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:25.674 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:25.674 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71536 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71536 00:29:25.675 07:26:49 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 71536 ']' 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:25.675 07:26:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:25.675 [2024-11-20 07:26:49.796096] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:25.675 [2024-11-20 07:26:49.796277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71536 ] 00:29:25.936 [2024-11-20 07:26:49.988892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.195 [2024-11-20 07:26:50.152613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.131 07:26:51 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.131 07:26:51 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:29:27.131 07:26:51 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:27.131 07:26:51 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:29:27.131 07:26:51 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:29:27.131 07:26:51 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:29:27.131 07:26:51 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:27.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.649 Waiting for block devices as requested 00:29:27.649 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.649 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.908 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:27.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.178 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:29:33.178 
07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 nvme0n1 00:29:33.178 nvme1n1 00:29:33.178 nvme2n1 00:29:33.178 nvme2n2 00:29:33.178 nvme2n3 00:29:33.178 nvme3n1 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:33.178 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.178 07:26:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2554c906-02ff-4ad2-8b69-04dd5160b142"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2554c906-02ff-4ad2-8b69-04dd5160b142",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8e34561-ad27-42e5-842d-b797fa775987"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8e34561-ad27-42e5-842d-b797fa775987",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5781d28c-bc88-4cc9-8211-1398a3b231f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5781d28c-bc88-4cc9-8211-1398a3b231f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "242b3778-a219-4909-bbf2-24df3e54cfe7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "242b3778-a219-4909-bbf2-24df3e54cfe7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ce6ab382-d118-4270-8ff4-2ea7a4790349"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce6ab382-d118-4270-8ff4-2ea7a4790349",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0e165098-9eca-40e7-8ef5-ec32b9da3f50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0e165098-9eca-40e7-8ef5-ec32b9da3f50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:33.437 07:26:57 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71536 00:29:33.437 07:26:57 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 71536 ']' 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71536 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71536 00:29:33.437 killing process with pid 71536 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71536' 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71536 00:29:33.437 07:26:57 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71536 00:29:35.969 07:27:00 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:35.969 07:27:00 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:29:35.969 07:27:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:35.969 07:27:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.969 07:27:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:36.228 ************************************ 00:29:36.228 START TEST bdev_hello_world 00:29:36.228 ************************************ 00:29:36.228 07:27:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:29:36.228 [2024-11-20 07:27:00.302542] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:29:36.228 [2024-11-20 07:27:00.302731] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71912 ] 00:29:36.486 [2024-11-20 07:27:00.503425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.486 [2024-11-20 07:27:00.680080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.059 [2024-11-20 07:27:01.193151] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:37.059 [2024-11-20 07:27:01.193216] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:29:37.059 [2024-11-20 07:27:01.193237] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:37.059 [2024-11-20 07:27:01.195728] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:37.059 [2024-11-20 07:27:01.196004] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:37.059 [2024-11-20 07:27:01.196025] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:37.059 [2024-11-20 07:27:01.196267] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
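That "Hello World!" line is the hello_bdev example completing a full write-then-read round trip on nvme0n1. Its invocation, reconstructed from the run_test line above (paths relative to the SPDK checkout):

build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1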
00:29:37.059 00:29:37.059 [2024-11-20 07:27:01.196289] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:38.437 00:29:38.437 real 0m2.296s 00:29:38.437 user 0m1.799s 00:29:38.437 sys 0m0.376s 00:29:38.437 ************************************ 00:29:38.437 END TEST bdev_hello_world 00:29:38.437 ************************************ 00:29:38.437 07:27:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.437 07:27:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:38.437 07:27:02 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:38.437 07:27:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:38.437 07:27:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.437 07:27:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:38.437 ************************************ 00:29:38.437 START TEST bdev_bounds 00:29:38.437 ************************************ 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:29:38.437 Process bdevio pid: 71960 00:29:38.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71960 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71960' 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71960 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71960 ']' 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.437 07:27:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:38.695 [2024-11-20 07:27:02.650372] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
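bdev_bounds drives the bdevio CUnit suites against the same six xnvme bdevs, and two pieces cooperate, as the trace below shows: the bdevio app loads the bdevs and waits on its RPC socket, and tests.py then tells it to run. A sketch (the harness backgrounds bdevio and waits for the socket before invoking tests.py):

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
test/bdev/bdevio/tests.py perform_tests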
00:29:38.695 [2024-11-20 07:27:02.651022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71960 ] 00:29:38.695 [2024-11-20 07:27:02.842048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:38.955 [2024-11-20 07:27:03.005324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.955 [2024-11-20 07:27:03.005462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.955 [2024-11-20 07:27:03.005470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.522 07:27:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.523 07:27:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:29:39.523 07:27:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:39.781 I/O targets: 00:29:39.781 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:39.781 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:39.781 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:39.781 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:39.781 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:39.781 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:39.781 00:29:39.781 00:29:39.781 CUnit - A unit testing framework for C - Version 2.1-3 00:29:39.781 http://cunit.sourceforge.net/ 00:29:39.781 00:29:39.781 00:29:39.781 Suite: bdevio tests on: nvme3n1 00:29:39.781 Test: blockdev write read block ...passed 00:29:39.781 Test: blockdev write zeroes read block ...passed 00:29:39.781 Test: blockdev write zeroes read no split ...passed 00:29:39.781 Test: blockdev write zeroes read split ...passed 00:29:39.781 Test: blockdev write zeroes read split partial ...passed 00:29:39.781 Test: blockdev reset ...passed 00:29:39.781 Test: blockdev write read 8 blocks ...passed 00:29:39.781 Test: blockdev write read size > 128k ...passed 00:29:39.781 Test: blockdev write read invalid size ...passed 00:29:39.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:39.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:39.781 Test: blockdev write read max offset ...passed 00:29:39.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:39.781 Test: blockdev writev readv 8 blocks ...passed 00:29:39.781 Test: blockdev writev readv 30 x 1block ...passed 00:29:39.781 Test: blockdev writev readv block ...passed 00:29:39.781 Test: blockdev writev readv size > 128k ...passed 00:29:39.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:39.781 Test: blockdev comparev and writev ...passed 00:29:39.781 Test: blockdev nvme passthru rw ...passed 00:29:39.781 Test: blockdev nvme passthru vendor specific ...passed 00:29:39.781 Test: blockdev nvme admin passthru ...passed 00:29:39.781 Test: blockdev copy ...passed 00:29:39.781 Suite: bdevio tests on: nvme2n3 00:29:39.781 Test: blockdev write read block ...passed 00:29:39.781 Test: blockdev write zeroes read block ...passed 00:29:39.781 Test: blockdev write zeroes read no split ...passed 00:29:40.040 Test: blockdev write zeroes read split ...passed 00:29:40.040 Test: blockdev write zeroes read split partial ...passed 00:29:40.040 Test: blockdev reset ...passed 
00:29:40.040 Test: blockdev write read 8 blocks ...passed 00:29:40.040 Test: blockdev write read size > 128k ...passed 00:29:40.040 Test: blockdev write read invalid size ...passed 00:29:40.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:40.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:40.040 Test: blockdev write read max offset ...passed 00:29:40.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:40.040 Test: blockdev writev readv 8 blocks ...passed 00:29:40.040 Test: blockdev writev readv 30 x 1block ...passed 00:29:40.040 Test: blockdev writev readv block ...passed 00:29:40.040 Test: blockdev writev readv size > 128k ...passed 00:29:40.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:40.040 Test: blockdev comparev and writev ...passed 00:29:40.040 Test: blockdev nvme passthru rw ...passed 00:29:40.040 Test: blockdev nvme passthru vendor specific ...passed 00:29:40.040 Test: blockdev nvme admin passthru ...passed 00:29:40.040 Test: blockdev copy ...passed 00:29:40.040 Suite: bdevio tests on: nvme2n2 00:29:40.040 Test: blockdev write read block ...passed 00:29:40.040 Test: blockdev write zeroes read block ...passed 00:29:40.040 Test: blockdev write zeroes read no split ...passed 00:29:40.040 Test: blockdev write zeroes read split ...passed 00:29:40.040 Test: blockdev write zeroes read split partial ...passed 00:29:40.040 Test: blockdev reset ...passed 00:29:40.040 Test: blockdev write read 8 blocks ...passed 00:29:40.040 Test: blockdev write read size > 128k ...passed 00:29:40.040 Test: blockdev write read invalid size ...passed 00:29:40.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:40.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:40.040 Test: blockdev write read max offset ...passed 00:29:40.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:40.040 Test: blockdev writev readv 8 blocks ...passed 00:29:40.040 Test: blockdev writev readv 30 x 1block ...passed 00:29:40.040 Test: blockdev writev readv block ...passed 00:29:40.040 Test: blockdev writev readv size > 128k ...passed 00:29:40.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:40.040 Test: blockdev comparev and writev ...passed 00:29:40.040 Test: blockdev nvme passthru rw ...passed 00:29:40.040 Test: blockdev nvme passthru vendor specific ...passed 00:29:40.040 Test: blockdev nvme admin passthru ...passed 00:29:40.040 Test: blockdev copy ...passed 00:29:40.040 Suite: bdevio tests on: nvme2n1 00:29:40.040 Test: blockdev write read block ...passed 00:29:40.040 Test: blockdev write zeroes read block ...passed 00:29:40.040 Test: blockdev write zeroes read no split ...passed 00:29:40.040 Test: blockdev write zeroes read split ...passed 00:29:40.040 Test: blockdev write zeroes read split partial ...passed 00:29:40.040 Test: blockdev reset ...passed 00:29:40.040 Test: blockdev write read 8 blocks ...passed 00:29:40.040 Test: blockdev write read size > 128k ...passed 00:29:40.040 Test: blockdev write read invalid size ...passed 00:29:40.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:40.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:40.040 Test: blockdev write read max offset ...passed 00:29:40.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:40.040 Test: blockdev writev readv 8 blocks 
...passed 00:29:40.040 Test: blockdev writev readv 30 x 1block ...passed 00:29:40.040 Test: blockdev writev readv block ...passed 00:29:40.040 Test: blockdev writev readv size > 128k ...passed 00:29:40.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:40.040 Test: blockdev comparev and writev ...passed 00:29:40.040 Test: blockdev nvme passthru rw ...passed 00:29:40.040 Test: blockdev nvme passthru vendor specific ...passed 00:29:40.040 Test: blockdev nvme admin passthru ...passed 00:29:40.040 Test: blockdev copy ...passed 00:29:40.040 Suite: bdevio tests on: nvme1n1 00:29:40.040 Test: blockdev write read block ...passed 00:29:40.040 Test: blockdev write zeroes read block ...passed 00:29:40.040 Test: blockdev write zeroes read no split ...passed 00:29:40.300 Test: blockdev write zeroes read split ...passed 00:29:40.300 Test: blockdev write zeroes read split partial ...passed 00:29:40.300 Test: blockdev reset ...passed 00:29:40.300 Test: blockdev write read 8 blocks ...passed 00:29:40.300 Test: blockdev write read size > 128k ...passed 00:29:40.300 Test: blockdev write read invalid size ...passed 00:29:40.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:40.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:40.300 Test: blockdev write read max offset ...passed 00:29:40.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:40.300 Test: blockdev writev readv 8 blocks ...passed 00:29:40.300 Test: blockdev writev readv 30 x 1block ...passed 00:29:40.300 Test: blockdev writev readv block ...passed 00:29:40.300 Test: blockdev writev readv size > 128k ...passed 00:29:40.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:40.300 Test: blockdev comparev and writev ...passed 00:29:40.300 Test: blockdev nvme passthru rw ...passed 00:29:40.300 Test: blockdev nvme passthru vendor specific ...passed 00:29:40.300 Test: blockdev nvme admin passthru ...passed 00:29:40.300 Test: blockdev copy ...passed 00:29:40.300 Suite: bdevio tests on: nvme0n1 00:29:40.300 Test: blockdev write read block ...passed 00:29:40.300 Test: blockdev write zeroes read block ...passed 00:29:40.300 Test: blockdev write zeroes read no split ...passed 00:29:40.300 Test: blockdev write zeroes read split ...passed 00:29:40.300 Test: blockdev write zeroes read split partial ...passed 00:29:40.300 Test: blockdev reset ...passed 00:29:40.300 Test: blockdev write read 8 blocks ...passed 00:29:40.300 Test: blockdev write read size > 128k ...passed 00:29:40.300 Test: blockdev write read invalid size ...passed 00:29:40.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:40.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:40.300 Test: blockdev write read max offset ...passed 00:29:40.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:40.300 Test: blockdev writev readv 8 blocks ...passed 00:29:40.300 Test: blockdev writev readv 30 x 1block ...passed 00:29:40.300 Test: blockdev writev readv block ...passed 00:29:40.300 Test: blockdev writev readv size > 128k ...passed 00:29:40.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:40.300 Test: blockdev comparev and writev ...passed 00:29:40.300 Test: blockdev nvme passthru rw ...passed 00:29:40.300 Test: blockdev nvme passthru vendor specific ...passed 00:29:40.300 Test: blockdev nvme admin passthru ...passed 00:29:40.300 Test: blockdev copy ...passed 
00:29:40.300
00:29:40.300 Run Summary: Type Total Ran Passed Failed Inactive
00:29:40.300 suites 6 6 n/a 0 0
00:29:40.300 tests 138 138 138 0 0
00:29:40.300 asserts 780 780 780 0 n/a
00:29:40.300
00:29:40.300 Elapsed time = 1.692 seconds
00:29:40.300 0
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71960
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71960 ']'
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71960
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71960
00:29:40.300 killing process with pid 71960 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:40.300 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71960'
00:29:40.301 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71960
00:29:40.301 07:27:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71960
00:29:41.678 ************************************
00:29:41.678 END TEST bdev_bounds
00:29:41.678 ************************************
00:29:41.678 07:27:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:29:41.678
00:29:41.678 real 0m3.236s
00:29:41.678 user 0m7.972s
00:29:41.678 sys 0m0.552s
00:29:41.678 07:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:41.678 07:27:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:29:41.678 07:27:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:29:41.678 07:27:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:29:41.678 07:27:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:41.678 07:27:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:29:41.678 ************************************
00:29:41.678 START TEST bdev_nbd
00:29:41.678 ************************************
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
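The killprocess trace above follows a fixed shell pattern: prove the PID is still alive with kill -0, refuse to kill a sudo wrapper, then kill and reap. A minimal sketch reconstructed from the xtrace lines (the function body is an approximation, not the verbatim autotest_common.sh source):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1              # no PID supplied
      kill -0 "$pid" || return 1             # process must still exist
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1 # never kill the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                            # reap it so the exit status is collected
  }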
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72031
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72031 /var/tmp/spdk-nbd.sock
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72031 ']'
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:41.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:41.678 07:27:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:29:41.938 [2024-11-20 07:27:05.938660] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
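The entries above show the standard SPDK start-up handshake: launch bdev_svc against the JSON config, remember its PID, install a cleanup trap, and block until the RPC socket answers. A condensed sketch of that sequence (waitforlisten's polling internals are not visible in the trace and are assumed):

  rpc_server=/var/tmp/spdk-nbd.sock
  conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
  nbd_pid=$!                                            # 72031 in this run
  trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
  waitforlisten "$nbd_pid" "$rpc_server"                # polls until the UNIX socket is up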
00:29:41.938 [2024-11-20 07:27:05.939363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:41.938 [2024-11-20 07:27:06.128462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:42.197 [2024-11-20 07:27:06.321980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1'
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1'
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:42.764 07:27:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:43.022 1+0 records in
00:29:43.022 1+0 records out
00:29:43.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645607 s, 6.3 MB/s
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:43.022 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:43.280 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:43.539 1+0 records in
00:29:43.539 1+0 records out
00:29:43.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562377 s, 7.3 MB/s
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:43.539 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
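The same start/verify cycle just ran for nbd0 and nbd1 and now repeats for nvme2n1 and the remaining bdevs. Condensing the xtrace lines into a sketch (the retry sleep is an assumption; the trace shows only the loop counters):

  # nbd_start_disk with no device argument lets the target pick a free /dev/nbdX and print it
  nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1)
  nbd_name=$(basename "$nbd_device")
  for ((i = 1; i <= 20; i++)); do                       # waitfornbd: poll the partition table
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
  done
  # prove the device answers I/O: one 4 KiB O_DIRECT read must come back non-empty
  dd if=$nbd_device of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
  rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
  [ "$size" != 0 ]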
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:43.797 1+0 records in
00:29:43.797 1+0 records out
00:29:43.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049062 s, 8.3 MB/s
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:43.797 07:27:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:44.056 1+0 records in
00:29:44.056 1+0 records out
00:29:44.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655297 s, 6.3 MB/s
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:44.056 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:44.314 1+0 records in
00:29:44.314 1+0 records out
00:29:44.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792618 s, 5.2 MB/s
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:44.314 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:29:44.572 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:29:44.572 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:44.573 1+0 records in
00:29:44.573 1+0 records out
00:29:44.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811026 s, 5.1 MB/s
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:29:44.573 07:27:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:44.831 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:29:44.831 {
00:29:44.831 "nbd_device": "/dev/nbd0",
00:29:44.831 "bdev_name": "nvme0n1"
00:29:44.831 },
00:29:44.831 {
00:29:44.831 "nbd_device": "/dev/nbd1",
00:29:44.831 "bdev_name": "nvme1n1"
00:29:44.831 },
00:29:44.831 {
00:29:44.831 "nbd_device": "/dev/nbd2",
00:29:44.831 "bdev_name": "nvme2n1"
00:29:44.831 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd3",
00:29:44.832 "bdev_name": "nvme2n2"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd4",
00:29:44.832 "bdev_name": "nvme2n3"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd5",
00:29:44.832 "bdev_name": "nvme3n1"
00:29:44.832 }
00:29:44.832 ]'
00:29:44.832 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:29:44.832 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd0",
00:29:44.832 "bdev_name": "nvme0n1"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd1",
00:29:44.832 "bdev_name": "nvme1n1"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd2",
00:29:44.832 "bdev_name": "nvme2n1"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd3",
00:29:44.832 "bdev_name": "nvme2n2"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd4",
00:29:44.832 "bdev_name": "nvme2n3"
00:29:44.832 },
00:29:44.832 {
00:29:44.832 "nbd_device": "/dev/nbd5",
00:29:44.832 "bdev_name": "nvme3n1"
00:29:44.832 }
00:29:44.832 ]'
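The JSON above is how nbd_get_disks reports the active mappings; the very next step flattens it to bare device paths. The consuming pattern, in one pipeline:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
  # -> /dev/nbd0 ... /dev/nbd5, one per line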
00:29:44.832 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:45.091 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:45.349 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:45.607 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:29:45.865 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:45.866 07:27:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:46.124 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:46.382 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
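Teardown mirrors start-up: nbd_stop_disk per device, then waitfornbd_exit spins until the name leaves /proc/partitions. A sketch of one iteration (the sleep is a stand-in; the trace shows only the RPC call and the grep loop):

  dev=/dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
  name=$(basename "$dev")
  for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions || break      # gone: the kernel released it
      sleep 0.1
  done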
00:29:46.641 07:27:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:46.899 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:29:47.465 /dev/nbd0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:47.465 1+0 records in
00:29:47.465 1+0 records out
00:29:47.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495966 s, 8.3 MB/s
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1
00:29:47.465 /dev/nbd1
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:47.465 1+0 records in
00:29:47.465 1+0 records out
00:29:47.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553012 s, 7.4 MB/s
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:47.465 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10
00:29:47.723 /dev/nbd10
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:47.723 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:47.723 1+0 records in
00:29:47.724 1+0 records out
00:29:47.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695172 s, 5.9 MB/s
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:47.724 07:27:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11
00:29:47.986 /dev/nbd11
00:29:47.986 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:29:47.986 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:47.987 1+0 records in
00:29:47.987 1+0 records out
00:29:47.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589377 s, 6.9 MB/s
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:47.987 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:48.245 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:48.245 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:48.246 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:48.246 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:48.246 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12
00:29:48.502 /dev/nbd12
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:48.502 1+0 records in
00:29:48.502 1+0 records out
00:29:48.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713714 s, 5.7 MB/s
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:48.502 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:29:48.758 /dev/nbd13
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:48.758 1+0 records in
00:29:48.758 1+0 records out
00:29:48.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524312 s, 7.8 MB/s
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:48.758 07:27:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd0",
00:29:49.017 "bdev_name": "nvme0n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd1",
00:29:49.017 "bdev_name": "nvme1n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd10",
00:29:49.017 "bdev_name": "nvme2n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd11",
00:29:49.017 "bdev_name": "nvme2n2"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd12",
00:29:49.017 "bdev_name": "nvme2n3"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd13",
00:29:49.017 "bdev_name": "nvme3n1"
00:29:49.017 }
00:29:49.017 ]'
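With all six bdevs pinned to explicit devices, nbd_get_count must report exactly six mappings before data verification may start. The check that follows reduces to:

  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)    # 6 in this run
  [ "$count" -ne 6 ] && return 1                        # bail out on a partial mapping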
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd0",
00:29:49.017 "bdev_name": "nvme0n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd1",
00:29:49.017 "bdev_name": "nvme1n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd10",
00:29:49.017 "bdev_name": "nvme2n1"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd11",
00:29:49.017 "bdev_name": "nvme2n2"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd12",
00:29:49.017 "bdev_name": "nvme2n3"
00:29:49.017 },
00:29:49.017 {
00:29:49.017 "nbd_device": "/dev/nbd13",
00:29:49.017 "bdev_name": "nvme3n1"
00:29:49.017 }
00:29:49.017 ]'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:29:49.017 /dev/nbd1
00:29:49.017 /dev/nbd10
00:29:49.017 /dev/nbd11
00:29:49.017 /dev/nbd12
00:29:49.017 /dev/nbd13'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:29:49.017 /dev/nbd1
00:29:49.017 /dev/nbd10
00:29:49.017 /dev/nbd11
00:29:49.017 /dev/nbd12
00:29:49.017 /dev/nbd13'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:29:49.017 256+0 records in
00:29:49.017 256+0 records out
00:29:49.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115243 s, 91.0 MB/s
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.017 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:29:49.275 256+0 records in
00:29:49.275 256+0 records out
00:29:49.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118534 s, 8.8 MB/s
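The write pass above seeds one 1 MiB random file and pushes it through every NBD device with O_DIRECT; the verify pass that follows reads each device back and byte-compares it against the same file. The whole round-trip, condensed (nbd1 through nbd13 repeat below exactly like nbd0):

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256          # one shared 1 MiB pattern
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      dd if=$tmp of=$dev bs=4096 count=256 oflag=direct # write past the page cache
  done
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      cmp -b -n 1M $tmp $dev                            # non-zero exit on any mismatch
  done
  rm $tmp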
00:29:49.275 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.275 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:29:49.275 256+0 records in
00:29:49.275 256+0 records out
00:29:49.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138736 s, 7.6 MB/s
00:29:49.275 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.275 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:29:49.533 256+0 records in
00:29:49.533 256+0 records out
00:29:49.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122317 s, 8.6 MB/s
00:29:49.533 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.533 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:29:49.533 256+0 records in
00:29:49.533 256+0 records out
00:29:49.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118526 s, 8.8 MB/s
00:29:49.533 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.533 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:29:49.791 256+0 records in
00:29:49.791 256+0 records out
00:29:49.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12349 s, 8.5 MB/s
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:29:49.791 256+0 records in
00:29:49.791 256+0 records out
00:29:49.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129024 s, 8.1 MB/s
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:49.791 07:27:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:50.358 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:29:50.616 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:50.874 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:50.874 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:50.874 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.875 07:27:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.133 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:51.391 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:29:51.391 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:51.391 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.392 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:51.650 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:51.909 07:27:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:52.169 malloc_lvol_verify 00:29:52.169 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:52.428 306b5751-76ec-4805-8faa-4ab64c0e0809 00:29:52.428 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:52.753 6943ebd4-2dc4-44c2-9173-1b860011711b 00:29:52.753 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:53.028 /dev/nbd0 00:29:53.028 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:53.028 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:53.028 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:53.028 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:53.028 07:27:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:53.028 mke2fs 1.47.0 (5-Feb-2023) 00:29:53.028 
Discarding device blocks: 0/4096 done 00:29:53.028 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:53.028 00:29:53.028 Allocating group tables: 0/1 done 00:29:53.028 Writing inode tables: 0/1 done 00:29:53.028 Creating journal (1024 blocks): done 00:29:53.028 Writing superblocks and filesystem accounting information: 0/1 done 00:29:53.028 00:29:53.028 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:53.028 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.028 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72031 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72031 ']' 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72031 00:29:53.029 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72031 00:29:53.288 killing process with pid 72031 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72031' 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72031 00:29:53.288 07:27:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72031 00:29:54.667 ************************************ 00:29:54.667 END TEST bdev_nbd 00:29:54.667 ************************************ 00:29:54.667 07:27:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:54.667 00:29:54.667 real 0m12.670s 00:29:54.667 user 0m16.799s 00:29:54.667 sys 0m5.212s 00:29:54.667 07:27:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.667 07:27:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.667 07:27:18 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:54.667 07:27:18 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:29:54.667 07:27:18 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:29:54.667 07:27:18 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:29:54.667 07:27:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:54.667 07:27:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.667 07:27:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:29:54.667 ************************************ 00:29:54.667 START TEST bdev_fio 00:29:54.667 ************************************ 00:29:54.667 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:29:54.667 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:29:54.667 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:54.667 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:54.667 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:29:54.668 
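The trace above builds the fio job file for the verify pass: fio_config_gen seeds /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio with a global verify/AIO section, and serialize_overlap=1 is appended once the fio binary reports a 3.x version. The per-bdev job sections echoed next follow the loop sketched below; the loop body mirrors the blockdev.sh@340-342 traces, with bdevs_name assumed to hold the xNVMe bdev names (nvme0n1 through nvme3n1) dumped later in this test.

    # one fio job per xNVMe bdev, appended to the generated bdev.fio
    for b in "${bdevs_name[@]}"; do
        echo "[job_${b}]"     # job section header, e.g. [job_nvme0n1]
        echo "filename=${b}"  # the spdk_bdev ioengine resolves this to the bdev name
    done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio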
07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:29:54.668 ************************************ 00:29:54.668 START TEST bdev_fio_rw_verify 00:29:54.668 ************************************ 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:54.668 07:27:18 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:54.926 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:54.926 fio-3.35 00:29:54.926 Starting 6 threads 00:30:07.129 00:30:07.129 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72460: Wed Nov 20 07:27:29 2024 00:30:07.129 read: IOPS=30.9k, BW=121MiB/s (126MB/s)(1206MiB/10001msec) 00:30:07.129 slat (usec): min=2, max=746, avg= 6.33, stdev= 4.27 00:30:07.129 clat (usec): min=120, max=602669, avg=618.27, 
stdev=3493.00 00:30:07.129 lat (usec): min=126, max=602677, avg=624.60, stdev=3493.06 00:30:07.129 clat percentiles (usec): 00:30:07.129 | 50.000th=[ 594], 99.000th=[ 1172], 99.900th=[ 1876], 00:30:07.129 | 99.990th=[ 4228], 99.999th=[599786] 00:30:07.129 write: IOPS=31.2k, BW=122MiB/s (128MB/s)(1218MiB/10001msec); 0 zone resets 00:30:07.129 slat (usec): min=13, max=3484, avg=25.29, stdev=30.57 00:30:07.129 clat (usec): min=101, max=6470, avg=670.61, stdev=246.23 00:30:07.129 lat (usec): min=118, max=6493, avg=695.90, stdev=249.70 00:30:07.129 clat percentiles (usec): 00:30:07.129 | 50.000th=[ 660], 99.000th=[ 1352], 99.900th=[ 1909], 99.990th=[ 2966], 00:30:07.129 | 99.999th=[ 6390] 00:30:07.129 bw ( KiB/s): min=91271, max=157551, per=100.00%, avg=126192.68, stdev=2995.10, samples=113 00:30:07.129 iops : min=22817, max=39386, avg=31547.64, stdev=748.79, samples=113 00:30:07.129 lat (usec) : 250=4.01%, 500=25.99%, 750=40.36%, 1000=24.06% 00:30:07.129 lat (msec) : 2=5.50%, 4=0.07%, 10=0.01%, 500=0.01%, 750=0.01% 00:30:07.129 cpu : usr=54.67%, sys=30.83%, ctx=6851, majf=0, minf=25961 00:30:07.129 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:07.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.129 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.129 issued rwts: total=308788,311786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.129 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:07.130 00:30:07.130 Run status group 0 (all jobs): 00:30:07.130 READ: bw=121MiB/s (126MB/s), 121MiB/s-121MiB/s (126MB/s-126MB/s), io=1206MiB (1265MB), run=10001-10001msec 00:30:07.130 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=1218MiB (1277MB), run=10001-10001msec 00:30:07.426 ----------------------------------------------------- 00:30:07.426 Suppressions used: 00:30:07.426 count bytes template 00:30:07.426 6 48 /usr/src/fio/parse.c 00:30:07.426 2772 266112 /usr/src/fio/iolog.c 00:30:07.426 1 8 libtcmalloc_minimal.so 00:30:07.426 1 904 libcrypto.so 00:30:07.426 ----------------------------------------------------- 00:30:07.426 00:30:07.426 00:30:07.426 real 0m12.775s 00:30:07.426 user 0m35.057s 00:30:07.426 sys 0m18.915s 00:30:07.426 07:27:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.426 07:27:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:30:07.426 ************************************ 00:30:07.426 END TEST bdev_fio_rw_verify 00:30:07.426 ************************************ 00:30:07.426 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # 
local fio_dir=/usr/src/fio 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2554c906-02ff-4ad2-8b69-04dd5160b142"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2554c906-02ff-4ad2-8b69-04dd5160b142",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b8e34561-ad27-42e5-842d-b797fa775987"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b8e34561-ad27-42e5-842d-b797fa775987",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5781d28c-bc88-4cc9-8211-1398a3b231f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5781d28c-bc88-4cc9-8211-1398a3b231f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "242b3778-a219-4909-bbf2-24df3e54cfe7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "242b3778-a219-4909-bbf2-24df3e54cfe7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ce6ab382-d118-4270-8ff4-2ea7a4790349"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce6ab382-d118-4270-8ff4-2ea7a4790349",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0e165098-9eca-40e7-8ef5-ec32b9da3f50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0e165098-9eca-40e7-8ef5-ec32b9da3f50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.427 /home/vagrant/spdk_repo/spdk 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:30:07.427 00:30:07.427 real 0m12.986s 00:30:07.427 user 0m35.159s 00:30:07.427 sys 0m19.026s 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.427 07:27:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:30:07.427 ************************************ 00:30:07.427 END TEST bdev_fio 00:30:07.427 ************************************ 00:30:07.427 07:27:31 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.427 07:27:31 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:07.427 07:27:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:07.427 07:27:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.427 07:27:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:07.427 ************************************ 00:30:07.427 START TEST bdev_verify 00:30:07.427 ************************************ 00:30:07.427 07:27:31 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:07.686 [2024-11-20 07:27:31.716869] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:07.686 [2024-11-20 07:27:31.717009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:30:07.944 [2024-11-20 07:27:31.905639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.945 [2024-11-20 07:27:32.083915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.945 [2024-11-20 07:27:32.083928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.512 Running I/O for 5 seconds... 
00:30:10.828 22592.00 IOPS, 88.25 MiB/s [2024-11-20T07:27:35.966Z] 23824.00 IOPS, 93.06 MiB/s [2024-11-20T07:27:36.902Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-20T07:27:37.879Z] 23240.00 IOPS, 90.78 MiB/s [2024-11-20T07:27:37.879Z] 22617.60 IOPS, 88.35 MiB/s 00:30:13.676 Latency(us) 00:30:13.676 [2024-11-20T07:27:37.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.676 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.676 Verification LBA range: start 0x0 length 0xa0000 00:30:13.676 nvme0n1 : 5.03 1604.71 6.27 0.00 0.00 79621.17 14293.09 87880.66 00:30:13.677 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0xa0000 length 0xa0000 00:30:13.677 nvme0n1 : 5.03 1732.08 6.77 0.00 0.00 73773.62 8113.98 80890.15 00:30:13.677 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x0 length 0xbd0bd 00:30:13.677 nvme1n1 : 5.07 2746.54 10.73 0.00 0.00 46219.61 5398.92 75896.93 00:30:13.677 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:30:13.677 nvme1n1 : 5.06 2726.97 10.65 0.00 0.00 46753.66 3604.48 67907.78 00:30:13.677 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x0 length 0x80000 00:30:13.677 nvme2n1 : 5.06 1619.25 6.33 0.00 0.00 78546.72 9112.62 76895.57 00:30:13.677 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x80000 length 0x80000 00:30:13.677 nvme2n1 : 5.06 1770.98 6.92 0.00 0.00 71855.05 6023.07 72401.68 00:30:13.677 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x0 length 0x80000 00:30:13.677 nvme2n2 : 5.07 1615.23 6.31 0.00 0.00 78610.59 19223.89 74898.29 00:30:13.677 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x80000 length 0x80000 00:30:13.677 nvme2n2 : 5.07 1765.99 6.90 0.00 0.00 71936.34 8550.89 66409.81 00:30:13.677 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x0 length 0x80000 00:30:13.677 nvme2n3 : 5.08 1613.84 6.30 0.00 0.00 78554.77 12170.97 85384.05 00:30:13.677 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x80000 length 0x80000 00:30:13.677 nvme2n3 : 5.07 1767.82 6.91 0.00 0.00 71736.27 4993.22 71403.03 00:30:13.677 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x0 length 0x20000 00:30:13.677 nvme3n1 : 5.08 1613.44 6.30 0.00 0.00 78439.15 10985.08 86882.01 00:30:13.677 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.677 Verification LBA range: start 0x20000 length 0x20000 00:30:13.677 nvme3n1 : 5.08 1765.46 6.90 0.00 0.00 71706.95 8176.40 78892.86 00:30:13.677 [2024-11-20T07:27:37.880Z] =================================================================================================================== 00:30:13.677 [2024-11-20T07:27:37.880Z] Total : 22342.33 87.27 0.00 0.00 68261.38 3604.48 87880.66 00:30:15.056 00:30:15.056 real 0m7.376s 00:30:15.056 user 0m11.612s 00:30:15.056 sys 0m1.917s 00:30:15.056 07:27:38 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.056 07:27:38 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:30:15.056 ************************************ 00:30:15.056 END TEST bdev_verify 00:30:15.056 ************************************ 00:30:15.056 07:27:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:15.056 07:27:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:30:15.056 07:27:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.056 07:27:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:15.056 ************************************ 00:30:15.056 START TEST bdev_verify_big_io 00:30:15.056 ************************************ 00:30:15.056 07:27:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:15.056 [2024-11-20 07:27:39.121395] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:15.056 [2024-11-20 07:27:39.121788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72741 ] 00:30:15.315 [2024-11-20 07:27:39.301001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:15.315 [2024-11-20 07:27:39.426086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.315 [2024-11-20 07:27:39.426101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.883 Running I/O for 5 seconds... 
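bdev_verify_big_io repeats the run with one parameter changed: -o 65536 instead of -o 4096, i.e. 64 KiB per I/O. The drop from roughly 22k IOPS to the 1.7k-3.3k range in the samples below is expected; this pass exists to exercise the large-I/O and split paths, not to maximize request rate.

    # identical to the bdev_verify invocation except for the I/O size
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3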
00:30:21.707 1736.00 IOPS, 108.50 MiB/s [2024-11-20T07:27:46.169Z] 3300.00 IOPS, 206.25 MiB/s [2024-11-20T07:27:46.169Z] 3218.67 IOPS, 201.17 MiB/s 00:30:21.967 Latency(us) 00:30:21.967 [2024-11-20T07:27:46.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.967 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0xa000 00:30:21.967 nvme0n1 : 5.87 125.37 7.84 0.00 0.00 997156.70 121834.54 1134459.37 00:30:21.967 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0xa000 length 0xa000 00:30:21.967 nvme0n1 : 5.85 164.02 10.25 0.00 0.00 763212.75 93872.52 922746.88 00:30:21.967 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0xbd0b 00:30:21.967 nvme1n1 : 5.87 163.44 10.21 0.00 0.00 746503.80 25215.76 1470003.69 00:30:21.967 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0xbd0b length 0xbd0b 00:30:21.967 nvme1n1 : 5.87 152.60 9.54 0.00 0.00 798845.95 77894.22 719023.54 00:30:21.967 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0x8000 00:30:21.967 nvme2n1 : 5.88 125.19 7.82 0.00 0.00 949291.00 116342.00 1693699.90 00:30:21.967 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x8000 length 0x8000 00:30:21.967 nvme2n1 : 5.88 138.83 8.68 0.00 0.00 853374.33 20472.20 1422068.78 00:30:21.967 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0x8000 00:30:21.967 nvme2n2 : 5.88 156.42 9.78 0.00 0.00 741175.52 100363.70 866822.83 00:30:21.967 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x8000 length 0x8000 00:30:21.967 nvme2n2 : 5.86 147.46 9.22 0.00 0.00 779279.68 87381.33 838860.80 00:30:21.967 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0x8000 00:30:21.967 nvme2n3 : 5.88 133.25 8.33 0.00 0.00 847898.91 91375.91 1414079.63 00:30:21.967 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x8000 length 0x8000 00:30:21.967 nvme2n3 : 5.88 117.00 7.31 0.00 0.00 955415.90 14293.09 2205005.53 00:30:21.967 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x0 length 0x2000 00:30:21.967 nvme3n1 : 5.89 179.38 11.21 0.00 0.00 619633.66 12919.95 942719.76 00:30:21.967 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:21.967 Verification LBA range: start 0x2000 length 0x2000 00:30:21.967 nvme3n1 : 5.88 117.10 7.32 0.00 0.00 931236.71 7271.38 1877450.36 00:30:21.967 [2024-11-20T07:27:46.170Z] =================================================================================================================== 00:30:21.967 [2024-11-20T07:27:46.170Z] Total : 1720.08 107.50 0.00 0.00 818123.00 7271.38 2205005.53 00:30:23.870 00:30:23.870 real 0m8.632s 00:30:23.870 user 0m15.592s 00:30:23.870 sys 0m0.623s 00:30:23.870 07:27:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.870 07:27:47 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:30:23.870 ************************************ 00:30:23.870 END TEST bdev_verify_big_io 00:30:23.870 ************************************ 00:30:23.870 07:27:47 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.870 07:27:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:23.870 07:27:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.870 07:27:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:23.870 ************************************ 00:30:23.870 START TEST bdev_write_zeroes 00:30:23.870 ************************************ 00:30:23.870 07:27:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.870 [2024-11-20 07:27:47.843255] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:23.870 [2024-11-20 07:27:47.843393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:30:23.870 [2024-11-20 07:27:48.019293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.128 [2024-11-20 07:27:48.178411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.695 Running I/O for 1 seconds... 
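bdev_write_zeroes narrows the harness again: a single reactor (core mask 0x1, hence only one "Reactor started" line above) runs a one-second -w write_zeroes workload. Write-zeroes requests carry no data payload, so this pass checks each bdev's zeroing path rather than data integrity.

    # one-second write_zeroes pass on the default single core
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1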
00:30:25.630 70208.00 IOPS, 274.25 MiB/s 00:30:25.630 Latency(us) 00:30:25.630 [2024-11-20T07:27:49.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.630 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme0n1 : 1.02 10211.53 39.89 0.00 0.00 12523.81 8363.64 17351.44 00:30:25.630 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme1n1 : 1.02 18785.42 73.38 0.00 0.00 6780.40 3729.31 15915.89 00:30:25.630 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme2n1 : 1.02 10266.65 40.10 0.00 0.00 12352.90 5118.05 16602.45 00:30:25.630 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme2n2 : 1.02 10189.24 39.80 0.00 0.00 12436.12 7989.15 16602.45 00:30:25.630 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme2n3 : 1.02 10178.95 39.76 0.00 0.00 12440.37 8176.40 16602.45 00:30:25.630 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.630 nvme3n1 : 1.02 10169.02 39.72 0.00 0.00 12442.60 8301.23 16727.28 00:30:25.630 [2024-11-20T07:27:49.833Z] =================================================================================================================== 00:30:25.630 [2024-11-20T07:27:49.833Z] Total : 69800.81 272.66 0.00 0.00 10914.22 3729.31 17351.44 00:30:27.007 00:30:27.007 real 0m3.348s 00:30:27.007 user 0m2.426s 00:30:27.007 sys 0m0.755s 00:30:27.007 07:27:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.007 ************************************ 00:30:27.007 END TEST bdev_write_zeroes 00:30:27.007 ************************************ 00:30:27.007 07:27:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:30:27.007 07:27:51 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:27.007 07:27:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:27.007 07:27:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.007 07:27:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:27.007 ************************************ 00:30:27.007 START TEST bdev_json_nonenclosed 00:30:27.007 ************************************ 00:30:27.007 07:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:27.266 [2024-11-20 07:27:51.253828] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
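bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json and the test passes only if configuration loading fails. The error lines that follow (json_config_prepare_ctx rejecting input "not enclosed in {}", then "spdk_app_stop'd on non-zero") are therefore the expected outcome. A sketch of the kind of input that trips the check, assuming nonenclosed.json simply drops the outer braces:

    # valid SPDK config: a JSON object wrapping "subsystems"
    #   { "subsystems": [ ... ] }
    # nonenclosed.json (assumed shape): the same payload without the enclosing {}
    #   "subsystems": [ ... ]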
00:30:27.266 [2024-11-20 07:27:51.254003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72921 ] 00:30:27.266 [2024-11-20 07:27:51.449866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.527 [2024-11-20 07:27:51.598286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.527 [2024-11-20 07:27:51.598416] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:27.527 [2024-11-20 07:27:51.598444] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:27.527 [2024-11-20 07:27:51.598459] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:27.785 00:30:27.785 real 0m0.754s 00:30:27.785 user 0m0.464s 00:30:27.785 sys 0m0.184s 00:30:27.785 07:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.785 07:27:51 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:30:27.785 ************************************ 00:30:27.785 END TEST bdev_json_nonenclosed 00:30:27.785 ************************************ 00:30:27.785 07:27:51 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:27.785 07:27:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:30:27.785 07:27:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.785 07:27:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:27.785 ************************************ 00:30:27.785 START TEST bdev_json_nonarray 00:30:27.785 ************************************ 00:30:27.785 07:27:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:28.050 [2024-11-20 07:27:52.061701] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:28.050 [2024-11-20 07:27:52.061964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72952 ] 00:30:28.050 [2024-11-20 07:27:52.242593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.313 [2024-11-20 07:27:52.397628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.313 [2024-11-20 07:27:52.397775] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:28.313 [2024-11-20 07:27:52.397804] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:28.313 [2024-11-20 07:27:52.397831] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:28.572 00:30:28.572 real 0m0.747s 00:30:28.572 user 0m0.469s 00:30:28.572 sys 0m0.172s 00:30:28.572 07:27:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.572 07:27:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:30:28.572 ************************************ 00:30:28.572 END TEST bdev_json_nonarray 00:30:28.572 ************************************ 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:30:28.572 07:27:52 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:29.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:30.517 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.517 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.517 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:30.776 00:30:30.776 real 1m5.315s 00:30:30.776 user 1m44.047s 00:30:30.776 sys 0m32.561s 00:30:30.776 07:27:54 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.776 07:27:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:30.776 ************************************ 00:30:30.776 END TEST blockdev_xnvme 00:30:30.776 ************************************ 00:30:30.776 07:27:54 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:30:30.776 07:27:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:30.776 07:27:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.776 07:27:54 -- common/autotest_common.sh@10 -- # set +x 00:30:30.776 ************************************ 00:30:30.776 START TEST ublk 00:30:30.776 ************************************ 00:30:30.776 07:27:54 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:30:30.776 * Looking for test storage... 
00:30:30.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:30:30.776 07:27:54 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.776 07:27:54 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.776 07:27:54 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.044 07:27:55 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.044 07:27:55 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.044 07:27:55 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.044 07:27:55 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.044 07:27:55 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.044 07:27:55 ublk -- scripts/common.sh@344 -- # case "$op" in 00:30:31.044 07:27:55 ublk -- scripts/common.sh@345 -- # : 1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.044 07:27:55 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.044 07:27:55 ublk -- scripts/common.sh@365 -- # decimal 1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@353 -- # local d=1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.044 07:27:55 ublk -- scripts/common.sh@355 -- # echo 1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.044 07:27:55 ublk -- scripts/common.sh@366 -- # decimal 2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@353 -- # local d=2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.044 07:27:55 ublk -- scripts/common.sh@355 -- # echo 2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.044 07:27:55 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.044 07:27:55 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.044 07:27:55 ublk -- scripts/common.sh@368 -- # return 0 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.044 --rc genhtml_branch_coverage=1 00:30:31.044 --rc genhtml_function_coverage=1 00:30:31.044 --rc genhtml_legend=1 00:30:31.044 --rc geninfo_all_blocks=1 00:30:31.044 --rc geninfo_unexecuted_blocks=1 00:30:31.044 00:30:31.044 ' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.044 --rc genhtml_branch_coverage=1 00:30:31.044 --rc genhtml_function_coverage=1 00:30:31.044 --rc genhtml_legend=1 00:30:31.044 --rc geninfo_all_blocks=1 00:30:31.044 --rc geninfo_unexecuted_blocks=1 00:30:31.044 00:30:31.044 ' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.044 --rc genhtml_branch_coverage=1 00:30:31.044 --rc 
genhtml_function_coverage=1 00:30:31.044 --rc genhtml_legend=1 00:30:31.044 --rc geninfo_all_blocks=1 00:30:31.044 --rc geninfo_unexecuted_blocks=1 00:30:31.044 00:30:31.044 ' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:31.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.044 --rc genhtml_branch_coverage=1 00:30:31.044 --rc genhtml_function_coverage=1 00:30:31.044 --rc genhtml_legend=1 00:30:31.044 --rc geninfo_all_blocks=1 00:30:31.044 --rc geninfo_unexecuted_blocks=1 00:30:31.044 00:30:31.044 ' 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:30:31.044 07:27:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:30:31.044 07:27:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:30:31.044 07:27:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:30:31.044 07:27:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:30:31.044 07:27:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:30:31.044 07:27:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:30:31.044 07:27:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:30:31.044 07:27:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:30:31.044 07:27:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.044 07:27:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:30:31.044 ************************************ 00:30:31.044 START TEST test_save_ublk_config 00:30:31.044 ************************************ 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73245 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73245 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:30:31.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
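waitforlisten above blocks until the spdk_tgt that was just launched (pid 73245) is actually serving RPCs; the traced defaults are rpc_addr=/var/tmp/spdk.sock and max_retries=100. An illustrative stand-in for that wait loop, not the real common/autotest_common.sh helper:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before it could listen
            # ready once the socket exists and a trivial RPC gets answered
            if [ -S "$rpc_addr" ] &&
               scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                      # never came up within the retry budget
    }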
00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73245 ']' 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.044 07:27:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:31.044 [2024-11-20 07:27:55.213650] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:31.045 [2024-11-20 07:27:55.214148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73245 ] 00:30:31.303 [2024-11-20 07:27:55.416070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.563 [2024-11-20 07:27:55.600430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.500 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:32.500 [2024-11-20 07:27:56.616841] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:30:32.500 [2024-11-20 07:27:56.618138] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:30:32.500 malloc0 00:30:32.759 [2024-11-20 07:27:56.704088] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:30:32.759 [2024-11-20 07:27:56.704242] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:30:32.759 [2024-11-20 07:27:56.704266] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:30:32.759 [2024-11-20 07:27:56.704285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:30:32.759 [2024-11-20 07:27:56.712051] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:32.759 [2024-11-20 07:27:56.712093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:32.759 [2024-11-20 07:27:56.719874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:32.759 [2024-11-20 07:27:56.720003] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:30:32.759 [2024-11-20 07:27:56.736901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:30:32.759 0 00:30:32.759 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.759 07:27:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:30:32.759 07:27:56 ublk.test_save_ublk_config -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.759 07:27:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:33.017 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.018 07:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:30:33.018 "subsystems": [ 00:30:33.018 { 00:30:33.018 "subsystem": "fsdev", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "fsdev_set_opts", 00:30:33.018 "params": { 00:30:33.018 "fsdev_io_pool_size": 65535, 00:30:33.018 "fsdev_io_cache_size": 256 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "keyring", 00:30:33.018 "config": [] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "iobuf", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "iobuf_set_options", 00:30:33.018 "params": { 00:30:33.018 "small_pool_count": 8192, 00:30:33.018 "large_pool_count": 1024, 00:30:33.018 "small_bufsize": 8192, 00:30:33.018 "large_bufsize": 135168, 00:30:33.018 "enable_numa": false 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "sock", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "sock_set_default_impl", 00:30:33.018 "params": { 00:30:33.018 "impl_name": "posix" 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "sock_impl_set_options", 00:30:33.018 "params": { 00:30:33.018 "impl_name": "ssl", 00:30:33.018 "recv_buf_size": 4096, 00:30:33.018 "send_buf_size": 4096, 00:30:33.018 "enable_recv_pipe": true, 00:30:33.018 "enable_quickack": false, 00:30:33.018 "enable_placement_id": 0, 00:30:33.018 "enable_zerocopy_send_server": true, 00:30:33.018 "enable_zerocopy_send_client": false, 00:30:33.018 "zerocopy_threshold": 0, 00:30:33.018 "tls_version": 0, 00:30:33.018 "enable_ktls": false 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "sock_impl_set_options", 00:30:33.018 "params": { 00:30:33.018 "impl_name": "posix", 00:30:33.018 "recv_buf_size": 2097152, 00:30:33.018 "send_buf_size": 2097152, 00:30:33.018 "enable_recv_pipe": true, 00:30:33.018 "enable_quickack": false, 00:30:33.018 "enable_placement_id": 0, 00:30:33.018 "enable_zerocopy_send_server": true, 00:30:33.018 "enable_zerocopy_send_client": false, 00:30:33.018 "zerocopy_threshold": 0, 00:30:33.018 "tls_version": 0, 00:30:33.018 "enable_ktls": false 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "vmd", 00:30:33.018 "config": [] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "accel", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "accel_set_options", 00:30:33.018 "params": { 00:30:33.018 "small_cache_size": 128, 00:30:33.018 "large_cache_size": 16, 00:30:33.018 "task_count": 2048, 00:30:33.018 "sequence_count": 2048, 00:30:33.018 "buf_count": 2048 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "bdev", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "bdev_set_options", 00:30:33.018 "params": { 00:30:33.018 "bdev_io_pool_size": 65535, 00:30:33.018 "bdev_io_cache_size": 256, 00:30:33.018 "bdev_auto_examine": true, 00:30:33.018 "iobuf_small_cache_size": 128, 00:30:33.018 "iobuf_large_cache_size": 16 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_raid_set_options", 00:30:33.018 "params": { 00:30:33.018 "process_window_size_kb": 1024, 00:30:33.018 
"process_max_bandwidth_mb_sec": 0 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_iscsi_set_options", 00:30:33.018 "params": { 00:30:33.018 "timeout_sec": 30 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_nvme_set_options", 00:30:33.018 "params": { 00:30:33.018 "action_on_timeout": "none", 00:30:33.018 "timeout_us": 0, 00:30:33.018 "timeout_admin_us": 0, 00:30:33.018 "keep_alive_timeout_ms": 10000, 00:30:33.018 "arbitration_burst": 0, 00:30:33.018 "low_priority_weight": 0, 00:30:33.018 "medium_priority_weight": 0, 00:30:33.018 "high_priority_weight": 0, 00:30:33.018 "nvme_adminq_poll_period_us": 10000, 00:30:33.018 "nvme_ioq_poll_period_us": 0, 00:30:33.018 "io_queue_requests": 0, 00:30:33.018 "delay_cmd_submit": true, 00:30:33.018 "transport_retry_count": 4, 00:30:33.018 "bdev_retry_count": 3, 00:30:33.018 "transport_ack_timeout": 0, 00:30:33.018 "ctrlr_loss_timeout_sec": 0, 00:30:33.018 "reconnect_delay_sec": 0, 00:30:33.018 "fast_io_fail_timeout_sec": 0, 00:30:33.018 "disable_auto_failback": false, 00:30:33.018 "generate_uuids": false, 00:30:33.018 "transport_tos": 0, 00:30:33.018 "nvme_error_stat": false, 00:30:33.018 "rdma_srq_size": 0, 00:30:33.018 "io_path_stat": false, 00:30:33.018 "allow_accel_sequence": false, 00:30:33.018 "rdma_max_cq_size": 0, 00:30:33.018 "rdma_cm_event_timeout_ms": 0, 00:30:33.018 "dhchap_digests": [ 00:30:33.018 "sha256", 00:30:33.018 "sha384", 00:30:33.018 "sha512" 00:30:33.018 ], 00:30:33.018 "dhchap_dhgroups": [ 00:30:33.018 "null", 00:30:33.018 "ffdhe2048", 00:30:33.018 "ffdhe3072", 00:30:33.018 "ffdhe4096", 00:30:33.018 "ffdhe6144", 00:30:33.018 "ffdhe8192" 00:30:33.018 ] 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_nvme_set_hotplug", 00:30:33.018 "params": { 00:30:33.018 "period_us": 100000, 00:30:33.018 "enable": false 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_malloc_create", 00:30:33.018 "params": { 00:30:33.018 "name": "malloc0", 00:30:33.018 "num_blocks": 8192, 00:30:33.018 "block_size": 4096, 00:30:33.018 "physical_block_size": 4096, 00:30:33.018 "uuid": "cfa9119c-21be-4852-9b1d-19fd5521447a", 00:30:33.018 "optimal_io_boundary": 0, 00:30:33.018 "md_size": 0, 00:30:33.018 "dif_type": 0, 00:30:33.018 "dif_is_head_of_md": false, 00:30:33.018 "dif_pi_format": 0 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "bdev_wait_for_examine" 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "scsi", 00:30:33.018 "config": null 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "scheduler", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "framework_set_scheduler", 00:30:33.018 "params": { 00:30:33.018 "name": "static" 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "vhost_scsi", 00:30:33.018 "config": [] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "vhost_blk", 00:30:33.018 "config": [] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "ublk", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "ublk_create_target", 00:30:33.018 "params": { 00:30:33.018 "cpumask": "1" 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "ublk_start_disk", 00:30:33.018 "params": { 00:30:33.018 "bdev_name": "malloc0", 00:30:33.018 "ublk_id": 0, 00:30:33.018 "num_queues": 1, 00:30:33.018 "queue_depth": 128 00:30:33.018 } 00:30:33.018 } 00:30:33.018 ] 00:30:33.018 }, 00:30:33.018 { 
00:30:33.018 "subsystem": "nbd", 00:30:33.018 "config": [] 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "subsystem": "nvmf", 00:30:33.018 "config": [ 00:30:33.018 { 00:30:33.018 "method": "nvmf_set_config", 00:30:33.018 "params": { 00:30:33.018 "discovery_filter": "match_any", 00:30:33.018 "admin_cmd_passthru": { 00:30:33.018 "identify_ctrlr": false 00:30:33.018 }, 00:30:33.018 "dhchap_digests": [ 00:30:33.018 "sha256", 00:30:33.018 "sha384", 00:30:33.018 "sha512" 00:30:33.018 ], 00:30:33.018 "dhchap_dhgroups": [ 00:30:33.018 "null", 00:30:33.018 "ffdhe2048", 00:30:33.018 "ffdhe3072", 00:30:33.018 "ffdhe4096", 00:30:33.018 "ffdhe6144", 00:30:33.018 "ffdhe8192" 00:30:33.018 ] 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "nvmf_set_max_subsystems", 00:30:33.018 "params": { 00:30:33.018 "max_subsystems": 1024 00:30:33.018 } 00:30:33.018 }, 00:30:33.018 { 00:30:33.018 "method": "nvmf_set_crdt", 00:30:33.019 "params": { 00:30:33.019 "crdt1": 0, 00:30:33.019 "crdt2": 0, 00:30:33.019 "crdt3": 0 00:30:33.019 } 00:30:33.019 } 00:30:33.019 ] 00:30:33.019 }, 00:30:33.019 { 00:30:33.019 "subsystem": "iscsi", 00:30:33.019 "config": [ 00:30:33.019 { 00:30:33.019 "method": "iscsi_set_options", 00:30:33.019 "params": { 00:30:33.019 "node_base": "iqn.2016-06.io.spdk", 00:30:33.019 "max_sessions": 128, 00:30:33.019 "max_connections_per_session": 2, 00:30:33.019 "max_queue_depth": 64, 00:30:33.019 "default_time2wait": 2, 00:30:33.019 "default_time2retain": 20, 00:30:33.019 "first_burst_length": 8192, 00:30:33.019 "immediate_data": true, 00:30:33.019 "allow_duplicated_isid": false, 00:30:33.019 "error_recovery_level": 0, 00:30:33.019 "nop_timeout": 60, 00:30:33.019 "nop_in_interval": 30, 00:30:33.019 "disable_chap": false, 00:30:33.019 "require_chap": false, 00:30:33.019 "mutual_chap": false, 00:30:33.019 "chap_group": 0, 00:30:33.019 "max_large_datain_per_connection": 64, 00:30:33.019 "max_r2t_per_connection": 4, 00:30:33.019 "pdu_pool_size": 36864, 00:30:33.019 "immediate_data_pool_size": 16384, 00:30:33.019 "data_out_pool_size": 2048 00:30:33.019 } 00:30:33.019 } 00:30:33.019 ] 00:30:33.019 } 00:30:33.019 ] 00:30:33.019 }' 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73245 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73245 ']' 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73245 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73245 00:30:33.019 killing process with pid 73245 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73245' 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73245 00:30:33.019 07:27:57 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73245 00:30:34.923 [2024-11-20 07:27:58.643301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:30:34.923 [2024-11-20 07:27:58.689915] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:34.923 [2024-11-20 07:27:58.690114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:30:34.923 [2024-11-20 07:27:58.700851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:34.923 [2024-11-20 07:27:58.700949] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:30:34.923 [2024-11-20 07:27:58.700969] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:30:34.923 [2024-11-20 07:27:58.701006] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:34.923 [2024-11-20 07:27:58.701187] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:36.847 07:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:30:36.847 07:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73316 00:30:36.847 07:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73316 00:30:36.847 07:28:00 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:30:36.847 "subsystems": [ 00:30:36.847 { 00:30:36.847 "subsystem": "fsdev", 00:30:36.847 "config": [ 00:30:36.847 { 00:30:36.847 "method": "fsdev_set_opts", 00:30:36.847 "params": { 00:30:36.847 "fsdev_io_pool_size": 65535, 00:30:36.847 "fsdev_io_cache_size": 256 00:30:36.847 } 00:30:36.847 } 00:30:36.847 ] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "keyring", 00:30:36.847 "config": [] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "iobuf", 00:30:36.847 "config": [ 00:30:36.847 { 00:30:36.847 "method": "iobuf_set_options", 00:30:36.847 "params": { 00:30:36.847 "small_pool_count": 8192, 00:30:36.847 "large_pool_count": 1024, 00:30:36.847 "small_bufsize": 8192, 00:30:36.847 "large_bufsize": 135168, 00:30:36.847 "enable_numa": false 00:30:36.847 } 00:30:36.847 } 00:30:36.847 ] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "sock", 00:30:36.847 "config": [ 00:30:36.847 { 00:30:36.847 "method": "sock_set_default_impl", 00:30:36.847 "params": { 00:30:36.847 "impl_name": "posix" 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "sock_impl_set_options", 00:30:36.847 "params": { 00:30:36.847 "impl_name": "ssl", 00:30:36.847 "recv_buf_size": 4096, 00:30:36.847 "send_buf_size": 4096, 00:30:36.847 "enable_recv_pipe": true, 00:30:36.847 "enable_quickack": false, 00:30:36.847 "enable_placement_id": 0, 00:30:36.847 "enable_zerocopy_send_server": true, 00:30:36.847 "enable_zerocopy_send_client": false, 00:30:36.847 "zerocopy_threshold": 0, 00:30:36.847 "tls_version": 0, 00:30:36.847 "enable_ktls": false 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "sock_impl_set_options", 00:30:36.847 "params": { 00:30:36.847 "impl_name": "posix", 00:30:36.847 "recv_buf_size": 2097152, 00:30:36.847 "send_buf_size": 2097152, 00:30:36.847 "enable_recv_pipe": true, 00:30:36.847 "enable_quickack": false, 00:30:36.847 "enable_placement_id": 0, 00:30:36.847 "enable_zerocopy_send_server": true, 00:30:36.847 "enable_zerocopy_send_client": false, 00:30:36.847 "zerocopy_threshold": 0, 00:30:36.847 "tls_version": 0, 00:30:36.847 "enable_ktls": false 00:30:36.847 } 00:30:36.847 } 00:30:36.847 ] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "vmd", 00:30:36.847 "config": [] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "accel", 00:30:36.847 "config": [ 00:30:36.847 { 00:30:36.847 "method": "accel_set_options", 00:30:36.847 "params": { 
00:30:36.847 "small_cache_size": 128, 00:30:36.847 "large_cache_size": 16, 00:30:36.847 "task_count": 2048, 00:30:36.847 "sequence_count": 2048, 00:30:36.847 "buf_count": 2048 00:30:36.847 } 00:30:36.847 } 00:30:36.847 ] 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "subsystem": "bdev", 00:30:36.847 "config": [ 00:30:36.847 { 00:30:36.847 "method": "bdev_set_options", 00:30:36.847 "params": { 00:30:36.847 "bdev_io_pool_size": 65535, 00:30:36.847 "bdev_io_cache_size": 256, 00:30:36.847 "bdev_auto_examine": true, 00:30:36.847 "iobuf_small_cache_size": 128, 00:30:36.847 "iobuf_large_cache_size": 16 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "bdev_raid_set_options", 00:30:36.847 "params": { 00:30:36.847 "process_window_size_kb": 1024, 00:30:36.847 "process_max_bandwidth_mb_sec": 0 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "bdev_iscsi_set_options", 00:30:36.847 "params": { 00:30:36.847 "timeout_sec": 30 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "bdev_nvme_set_options", 00:30:36.847 "params": { 00:30:36.847 "action_on_timeout": "none", 00:30:36.847 "timeout_us": 0, 00:30:36.847 "timeout_admin_us": 0, 00:30:36.847 "keep_alive_timeout_ms": 10000, 00:30:36.847 "arbitration_burst": 0, 00:30:36.847 "low_priority_weight": 0, 00:30:36.847 "medium_priority_weight": 0, 00:30:36.847 "high_priority_weight": 0, 00:30:36.847 "nvme_adminq_poll_period_us": 10000, 00:30:36.847 "nvme_ioq_poll_period_us": 0, 00:30:36.847 "io_queue_requests": 0, 00:30:36.847 "delay_cmd_submit": true, 00:30:36.847 "transport_retry_count": 4, 00:30:36.847 "bdev_retry_count": 3, 00:30:36.847 "transport_ack_timeout": 0, 00:30:36.847 "ctrlr_loss_timeout_sec": 0, 00:30:36.847 "reconnect_delay_sec": 0, 00:30:36.847 "fast_io_fail_timeout_sec": 0, 00:30:36.847 "disable_auto_failback": false, 00:30:36.847 "generate_uuids": false, 00:30:36.847 "transport_tos": 0, 00:30:36.847 "nvme_error_stat": false, 00:30:36.847 "rdma_srq_size": 0, 00:30:36.847 "io_path_stat": false, 00:30:36.847 "allow_accel_sequence": false, 00:30:36.847 "rdma_max_cq_size": 0, 00:30:36.847 "rdma_cm_event_timeout_ms": 0, 00:30:36.847 "dhchap_digests": [ 00:30:36.847 "sha256", 00:30:36.847 "sha384", 00:30:36.847 "sha512" 00:30:36.847 ], 00:30:36.847 "dhchap_dhgroups": [ 00:30:36.847 "null", 00:30:36.847 "ffdhe2048", 00:30:36.847 "ffdhe3072", 00:30:36.847 "ffdhe4096", 00:30:36.847 "ffdhe6144", 00:30:36.847 "ffdhe8192" 00:30:36.847 ] 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "bdev_nvme_set_hotplug", 00:30:36.847 "params": { 00:30:36.847 "period_us": 100000, 00:30:36.847 "enable": false 00:30:36.847 } 00:30:36.847 }, 00:30:36.847 { 00:30:36.847 "method": "bdev_malloc_create", 00:30:36.847 "params": { 00:30:36.848 "name": "malloc0", 00:30:36.848 "num_blocks": 8192, 00:30:36.848 "block_size": 4096, 00:30:36.848 "physical_block_size": 4096, 00:30:36.848 "uuid": "cfa9119c-21be-4852-9b1d-19fd5521447a", 00:30:36.848 "optimal_io_boundary": 0, 00:30:36.848 "md_size": 0, 00:30:36.848 "dif_type": 0, 00:30:36.848 "dif_is_head_of_md": false, 00:30:36.848 "dif_pi_format": 0 00:30:36.848 } 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "method": "bdev_wait_for_examine" 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "scsi", 00:30:36.848 "config": null 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "scheduler", 00:30:36.848 "config": [ 00:30:36.848 { 00:30:36.848 "method": "framework_set_scheduler", 00:30:36.848 "params": { 00:30:36.848 
"name": "static" 00:30:36.848 } 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "vhost_scsi", 00:30:36.848 "config": [] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "vhost_blk", 00:30:36.848 "config": [] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "ublk", 00:30:36.848 "config": [ 00:30:36.848 { 00:30:36.848 "method": "ublk_create_target", 00:30:36.848 "params": { 00:30:36.848 "cpumask": "1" 00:30:36.848 } 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "method": "ublk_start_disk", 00:30:36.848 "params": { 00:30:36.848 "bdev_name": "malloc0", 00:30:36.848 "ublk_id": 0, 00:30:36.848 "num_queues": 1, 00:30:36.848 "queue_depth": 128 00:30:36.848 } 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "nbd", 00:30:36.848 "config": [] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "nvmf", 00:30:36.848 "config": [ 00:30:36.848 { 00:30:36.848 "method": "nvmf_set_config", 00:30:36.848 "params": { 00:30:36.848 "discovery_filter": "match_any", 00:30:36.848 "admin_cmd_passthru": { 00:30:36.848 "identify_ctrlr": false 00:30:36.848 }, 00:30:36.848 "dhchap_digests": [ 00:30:36.848 "sha256", 00:30:36.848 "sha384", 00:30:36.848 "sha512" 00:30:36.848 ], 00:30:36.848 "dhchap_dhgroups": [ 00:30:36.848 "null", 00:30:36.848 "ffdhe2048", 00:30:36.848 "ffdhe3072", 00:30:36.848 "ffdhe4096", 00:30:36.848 "ffdhe6144", 00:30:36.848 "ffdhe8192" 00:30:36.848 ] 00:30:36.848 } 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "method": "nvmf_set_max_subsystems", 00:30:36.848 "params": { 00:30:36.848 "max_subsystems": 1024 00:30:36.848 } 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "method": "nvmf_set_crdt", 00:30:36.848 "params": { 00:30:36.848 "crdt1": 0, 00:30:36.848 "crdt2": 0, 00:30:36.848 "crdt3": 0 00:30:36.848 } 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 }, 00:30:36.848 { 00:30:36.848 "subsystem": "iscsi", 00:30:36.848 "config": [ 00:30:36.848 { 00:30:36.848 "method": "iscsi_set_options", 00:30:36.848 "params": { 00:30:36.848 "node_base": "iqn.2016-06.io.spdk", 00:30:36.848 "max_sessions": 128, 00:30:36.848 "max_connections_per_session": 2, 00:30:36.848 "max_queue_depth": 64, 00:30:36.848 "default_time2wait": 2, 00:30:36.848 "default_time2retain": 20, 00:30:36.848 "first_burst_length": 8192, 00:30:36.848 "immediate_data": true, 00:30:36.848 "allow_duplicated_isid": false, 00:30:36.848 "error_recovery_level": 0, 00:30:36.848 "nop_timeout": 60, 00:30:36.848 "nop_in_interval": 30, 00:30:36.848 "disable_chap": false, 00:30:36.848 "require_chap": false, 00:30:36.848 "mutual_chap": false, 00:30:36.848 "chap_group": 0, 00:30:36.848 "max_large_datain_per_connection": 64, 00:30:36.848 "max_r2t_per_connection": 4, 00:30:36.848 "pdu_pool_size": 36864, 00:30:36.848 "immediate_data_pool_size": 16384, 00:30:36.848 "data_out_pool_size": 2048 00:30:36.848 } 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 } 00:30:36.848 ] 00:30:36.848 }' 00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73316 ']' 00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.848 07:28:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:36.848 [2024-11-20 07:28:00.832703] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:30:36.848 [2024-11-20 07:28:00.832875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73316 ] 00:30:36.848 [2024-11-20 07:28:01.013989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.106 [2024-11-20 07:28:01.136223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.480 [2024-11-20 07:28:02.270845] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:30:38.480 [2024-11-20 07:28:02.272284] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:30:38.480 [2024-11-20 07:28:02.279006] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:30:38.480 [2024-11-20 07:28:02.279124] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:30:38.480 [2024-11-20 07:28:02.279139] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:30:38.480 [2024-11-20 07:28:02.279148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:30:38.480 [2024-11-20 07:28:02.287923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:38.480 [2024-11-20 07:28:02.287963] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:38.480 [2024-11-20 07:28:02.294888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:38.480 [2024-11-20 07:28:02.295023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:30:38.481 [2024-11-20 07:28:02.311865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73316 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73316 ']' 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73316 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73316 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:38.481 killing process with pid 73316 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73316' 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73316 00:30:38.481 07:28:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73316 00:30:40.399 [2024-11-20 07:28:04.108861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:30:40.399 [2024-11-20 07:28:04.136970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:40.399 [2024-11-20 07:28:04.137137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:30:40.399 [2024-11-20 07:28:04.144885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:40.399 [2024-11-20 07:28:04.144958] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:30:40.399 [2024-11-20 07:28:04.144969] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:30:40.399 [2024-11-20 07:28:04.145004] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:40.399 [2024-11-20 07:28:04.145192] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:42.301 07:28:06 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:30:42.301 00:30:42.301 real 0m11.048s 00:30:42.301 user 0m8.800s 00:30:42.301 sys 0m3.205s 00:30:42.301 ************************************ 00:30:42.301 END TEST test_save_ublk_config 00:30:42.301 ************************************ 00:30:42.302 07:28:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.302 07:28:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:30:42.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.302 07:28:06 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73412 00:30:42.302 07:28:06 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:42.302 07:28:06 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:30:42.302 07:28:06 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73412 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@835 -- # '[' -z 73412 ']' 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.302 07:28:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:30:42.302 [2024-11-20 07:28:06.316808] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
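The pass criterion for test_save_ublk_config, checked just before pid 73316 was killed above, is that the restored disk is visible consistently from both sides: SPDK's RPC view and the kernel's device node. Condensed from the traced jq pipeline (same RPCs and checks as in the log, default RPC socket assumed):

    # what SPDK thinks the ublk device node is called
    blkpath=$(scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [[ $blkpath == /dev/ublkb0 ]]                     # RPC view matches the expected name
    [[ -b $blkpath ]]                                 # and the kernel really created the node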
00:30:42.302 [2024-11-20 07:28:06.317286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73412 ] 00:30:42.560 [2024-11-20 07:28:06.512842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:42.560 [2024-11-20 07:28:06.637945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.560 [2024-11-20 07:28:06.637983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.493 07:28:07 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.493 07:28:07 ublk -- common/autotest_common.sh@868 -- # return 0 00:30:43.493 07:28:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:30:43.493 07:28:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:43.493 07:28:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.493 07:28:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:30:43.493 ************************************ 00:30:43.493 START TEST test_create_ublk 00:30:43.493 ************************************ 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:30:43.493 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:43.493 [2024-11-20 07:28:07.620848] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:30:43.493 [2024-11-20 07:28:07.623915] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.493 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:30:43.493 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.493 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:43.752 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.752 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:30:43.752 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:30:43.752 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.752 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:43.752 [2024-11-20 07:28:07.916056] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:30:43.752 [2024-11-20 07:28:07.916619] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:30:43.752 [2024-11-20 07:28:07.916647] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:30:43.752 [2024-11-20 07:28:07.916657] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:30:43.752 [2024-11-20 07:28:07.926907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:43.752 [2024-11-20 07:28:07.926946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:43.752 
[2024-11-20 07:28:07.934876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:43.752 [2024-11-20 07:28:07.945926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:30:44.012 [2024-11-20 07:28:07.967889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:30:44.012 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.012 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:30:44.012 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:30:44.012 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:30:44.012 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.012 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:44.012 07:28:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.012 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:30:44.012 { 00:30:44.012 "ublk_device": "/dev/ublkb0", 00:30:44.012 "id": 0, 00:30:44.012 "queue_depth": 512, 00:30:44.012 "num_queues": 4, 00:30:44.012 "bdev_name": "Malloc0" 00:30:44.012 } 00:30:44.012 ]' 00:30:44.012 07:28:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
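run_fio_test above has just assembled the pattern job that runs next: a 10-second, time-based, O_DIRECT sequential write of pattern 0xcc across the full 128 MiB device. do_verify=1 requests a follow-up verification pass, but as fio's notice in the output below says, that read phase never starts because the time-based write consumes the entire runtime. The same invocation, unwrapped from the shell template for readability; every flag is taken from the trace:

    # sequential O_DIRECT pattern write over the whole 128 MiB ublk device,
    # time-based for 10 s, writing pattern 0xcc
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0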
00:30:44.012 07:28:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:30:44.271 fio: verification read phase will never start because write phase uses all of runtime 00:30:44.271 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:30:44.271 fio-3.35 00:30:44.271 Starting 1 process 00:30:54.287 00:30:54.287 fio_test: (groupid=0, jobs=1): err= 0: pid=73462: Wed Nov 20 07:28:18 2024 00:30:54.287 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(536MiB/10001msec); 0 zone resets 00:30:54.287 clat (usec): min=44, max=4081, avg=71.85, stdev=107.18 00:30:54.287 lat (usec): min=45, max=4082, avg=72.40, stdev=107.19 00:30:54.287 clat percentiles (usec): 00:30:54.287 | 1.00th=[ 49], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 63], 00:30:54.287 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 68], 00:30:54.287 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 76], 95.00th=[ 80], 00:30:54.287 | 99.00th=[ 93], 99.50th=[ 111], 99.90th=[ 2311], 99.95th=[ 2933], 00:30:54.287 | 99.99th=[ 3589] 00:30:54.287 bw ( KiB/s): min=52408, max=60128, per=100.00%, avg=55023.16, stdev=1630.90, samples=19 00:30:54.287 iops : min=13102, max=15032, avg=13755.79, stdev=407.73, samples=19 00:30:54.287 lat (usec) : 50=1.38%, 100=97.99%, 250=0.34%, 500=0.06%, 750=0.02% 00:30:54.287 lat (usec) : 1000=0.02% 00:30:54.287 lat (msec) : 2=0.07%, 4=0.12%, 10=0.01% 00:30:54.287 cpu : usr=2.67%, sys=11.23%, ctx=137334, majf=0, minf=795 00:30:54.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.287 issued rwts: total=0,137341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:54.287 00:30:54.287 Run status group 0 (all jobs): 00:30:54.287 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=536MiB (563MB), run=10001-10001msec 00:30:54.287 00:30:54.287 Disk stats (read/write): 00:30:54.287 ublkb0: ios=0/135912, merge=0/0, ticks=0/8538, in_queue=8538, util=99.14% 00:30:54.287 07:28:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:30:54.287 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.287 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:54.287 [2024-11-20 07:28:18.469338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:30:54.546 [2024-11-20 07:28:18.514412] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:54.546 [2024-11-20 07:28:18.515531] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:30:54.546 [2024-11-20 07:28:18.526934] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:54.546 [2024-11-20 07:28:18.531215] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:30:54.546 [2024-11-20 07:28:18.531251] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.546 07:28:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:54.546 [2024-11-20 07:28:18.541990] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:30:54.546 request: 00:30:54.546 { 00:30:54.546 "ublk_id": 0, 00:30:54.546 "method": "ublk_stop_disk", 00:30:54.546 "req_id": 1 00:30:54.546 } 00:30:54.546 Got JSON-RPC error response 00:30:54.546 response: 00:30:54.546 { 00:30:54.546 "code": -19, 00:30:54.546 "message": "No such device" 00:30:54.546 } 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:54.546 07:28:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:54.546 [2024-11-20 07:28:18.558008] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:54.546 [2024-11-20 07:28:18.566253] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:54.546 [2024-11-20 07:28:18.566334] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.546 07:28:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.546 07:28:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.512 07:28:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:30:55.512 07:28:19 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:30:55.512 ************************************ 00:30:55.512 END TEST test_create_ublk 00:30:55.512 ************************************ 00:30:55.512 07:28:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:30:55.512 00:30:55.512 real 0m12.017s 00:30:55.512 user 0m0.680s 00:30:55.512 sys 0m1.237s 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.512 07:28:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 07:28:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:30:55.512 07:28:19 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:55.512 07:28:19 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.512 07:28:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 ************************************ 00:30:55.512 START TEST test_create_multi_ublk 00:30:55.512 ************************************ 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:55.512 [2024-11-20 07:28:19.704846] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:30:55.512 [2024-11-20 07:28:19.708418] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:30:55.512 07:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:30:55.771 07:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:55.771 07:28:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:30:55.771 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.771 07:28:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.030 [2024-11-20 07:28:20.052031] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:30:56.030 [2024-11-20 07:28:20.052609] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:30:56.030 [2024-11-20 07:28:20.052627] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:30:56.030 [2024-11-20 07:28:20.052644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:30:56.030 [2024-11-20 07:28:20.064327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:56.030 [2024-11-20 07:28:20.064365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:56.030 [2024-11-20 07:28:20.070886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:56.030 [2024-11-20 07:28:20.071600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:30:56.030 [2024-11-20 07:28:20.091897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.030 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.289 [2024-11-20 07:28:20.461035] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:30:56.289 [2024-11-20 07:28:20.461614] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:30:56.289 [2024-11-20 07:28:20.461642] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:30:56.289 [2024-11-20 07:28:20.461652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:30:56.289 [2024-11-20 07:28:20.468887] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:56.289 [2024-11-20 07:28:20.468913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:56.289 [2024-11-20 07:28:20.476874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:56.289 [2024-11-20 07:28:20.477555] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:30:56.289 [2024-11-20 07:28:20.482931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:30:56.289 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.548 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:30:56.548 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:56.549 
07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:30:56.549 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.549 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:56.808 [2024-11-20 07:28:20.842078] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:30:56.808 [2024-11-20 07:28:20.842721] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:30:56.808 [2024-11-20 07:28:20.842740] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:30:56.808 [2024-11-20 07:28:20.842753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:30:56.808 [2024-11-20 07:28:20.849913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:56.808 [2024-11-20 07:28:20.849956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:56.808 [2024-11-20 07:28:20.857867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:56.808 [2024-11-20 07:28:20.858696] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:30:56.808 [2024-11-20 07:28:20.870006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.808 07:28:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:57.067 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.067 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:30:57.067 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:30:57.067 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.067 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:57.067 [2024-11-20 07:28:21.229065] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:30:57.067 [2024-11-20 07:28:21.229659] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:30:57.067 [2024-11-20 07:28:21.229677] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:30:57.067 [2024-11-20 07:28:21.229687] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:30:57.068 
[2024-11-20 07:28:21.236928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:30:57.068 [2024-11-20 07:28:21.236954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:30:57.068 [2024-11-20 07:28:21.244894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:30:57.068 [2024-11-20 07:28:21.245614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:30:57.068 [2024-11-20 07:28:21.254344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:30:57.068 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.068 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:30:57.068 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:30:57.068 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.068 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:30:57.327 { 00:30:57.327 "ublk_device": "/dev/ublkb0", 00:30:57.327 "id": 0, 00:30:57.327 "queue_depth": 512, 00:30:57.327 "num_queues": 4, 00:30:57.327 "bdev_name": "Malloc0" 00:30:57.327 }, 00:30:57.327 { 00:30:57.327 "ublk_device": "/dev/ublkb1", 00:30:57.327 "id": 1, 00:30:57.327 "queue_depth": 512, 00:30:57.327 "num_queues": 4, 00:30:57.327 "bdev_name": "Malloc1" 00:30:57.327 }, 00:30:57.327 { 00:30:57.327 "ublk_device": "/dev/ublkb2", 00:30:57.327 "id": 2, 00:30:57.327 "queue_depth": 512, 00:30:57.327 "num_queues": 4, 00:30:57.327 "bdev_name": "Malloc2" 00:30:57.327 }, 00:30:57.327 { 00:30:57.327 "ublk_device": "/dev/ublkb3", 00:30:57.327 "id": 3, 00:30:57.327 "queue_depth": 512, 00:30:57.327 "num_queues": 4, 00:30:57.327 "bdev_name": "Malloc3" 00:30:57.327 } 00:30:57.327 ]' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:30:57.327 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:30:57.587 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:30:57.846 07:28:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:30:57.846 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:30:57.846 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:58.105 [2024-11-20 07:28:22.139025] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:30:58.105 [2024-11-20 07:28:22.176277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:58.105 [2024-11-20 07:28:22.177566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:30:58.105 [2024-11-20 07:28:22.183883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:58.105 [2024-11-20 07:28:22.184244] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:30:58.105 [2024-11-20 07:28:22.184267] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:58.105 [2024-11-20 07:28:22.199971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:30:58.105 [2024-11-20 07:28:22.238947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:58.105 [2024-11-20 07:28:22.239979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:30:58.105 [2024-11-20 07:28:22.246889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:58.105 [2024-11-20 07:28:22.247269] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:30:58.105 [2024-11-20 07:28:22.247290] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.105 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:58.105 [2024-11-20 07:28:22.262037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:30:58.105 [2024-11-20 07:28:22.301905] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:58.105 [2024-11-20 07:28:22.302942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:30:58.364 [2024-11-20 07:28:22.310916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:58.364 [2024-11-20 07:28:22.311293] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:30:58.364 [2024-11-20 07:28:22.311309] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:30:58.364 [2024-11-20 07:28:22.326004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:30:58.364 [2024-11-20 07:28:22.365917] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:58.364 [2024-11-20 07:28:22.366806] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:30:58.364 [2024-11-20 07:28:22.374911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:58.364 [2024-11-20 07:28:22.375270] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:30:58.364 [2024-11-20 07:28:22.375286] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.364 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:30:58.623 [2024-11-20 07:28:22.673963] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:58.623 [2024-11-20 07:28:22.681848] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:58.623 [2024-11-20 07:28:22.681913] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:30:58.623 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:30:58.623 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:58.623 07:28:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:58.623 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.623 07:28:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:59.559 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.559 07:28:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:59.559 07:28:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:59.559 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.559 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:30:59.818 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.818 07:28:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:30:59.818 07:28:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:30:59.818 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.818 07:28:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:31:00.385 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.385 07:28:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:31:00.385 07:28:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:31:00.385 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.385 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:31:00.643 07:28:24 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:31:00.643 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:31:00.644 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:31:00.644 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:31:00.644 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.644 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:31:00.903 ************************************ 00:31:00.903 END TEST test_create_multi_ublk 00:31:00.903 ************************************ 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:31:00.903 00:31:00.903 real 0m5.216s 00:31:00.903 user 0m1.124s 00:31:00.903 sys 0m0.218s 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.903 07:28:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:31:00.903 07:28:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:00.903 07:28:24 ublk -- ublk/ublk.sh@147 -- # cleanup 00:31:00.903 07:28:24 ublk -- ublk/ublk.sh@130 -- # killprocess 73412 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@954 -- # '[' -z 73412 ']' 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@958 -- # kill -0 73412 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@959 -- # uname 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73412 00:31:00.903 killing process with pid 73412 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73412' 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@973 -- # kill 73412 00:31:00.903 07:28:24 ublk -- common/autotest_common.sh@978 -- # wait 73412 00:31:02.281 [2024-11-20 07:28:26.272954] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:31:02.281 [2024-11-20 07:28:26.273026] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:31:03.660 00:31:03.660 real 0m32.832s 00:31:03.660 user 0m47.258s 00:31:03.660 sys 0m10.796s 00:31:03.660 07:28:27 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.660 ************************************ 00:31:03.660 END TEST ublk 00:31:03.660 ************************************ 00:31:03.660 07:28:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:31:03.660 07:28:27 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:31:03.660 
07:28:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:03.660 07:28:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:03.660 07:28:27 -- common/autotest_common.sh@10 -- # set +x 00:31:03.660 ************************************ 00:31:03.660 START TEST ublk_recovery 00:31:03.660 ************************************ 00:31:03.660 07:28:27 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:31:03.661 * Looking for test storage... 00:31:03.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:31:03.661 07:28:27 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:03.661 07:28:27 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:31:03.661 07:28:27 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:03.919 07:28:27 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:31:03.919 07:28:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:03.920 07:28:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:03.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.920 --rc genhtml_branch_coverage=1 00:31:03.920 --rc genhtml_function_coverage=1 00:31:03.920 --rc genhtml_legend=1 00:31:03.920 --rc geninfo_all_blocks=1 00:31:03.920 --rc geninfo_unexecuted_blocks=1 00:31:03.920 00:31:03.920 ' 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:03.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.920 --rc genhtml_branch_coverage=1 00:31:03.920 --rc genhtml_function_coverage=1 00:31:03.920 --rc genhtml_legend=1 00:31:03.920 --rc geninfo_all_blocks=1 00:31:03.920 --rc geninfo_unexecuted_blocks=1 00:31:03.920 00:31:03.920 ' 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:03.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.920 --rc genhtml_branch_coverage=1 00:31:03.920 --rc genhtml_function_coverage=1 00:31:03.920 --rc genhtml_legend=1 00:31:03.920 --rc geninfo_all_blocks=1 00:31:03.920 --rc geninfo_unexecuted_blocks=1 00:31:03.920 00:31:03.920 ' 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:03.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:03.920 --rc genhtml_branch_coverage=1 00:31:03.920 --rc genhtml_function_coverage=1 00:31:03.920 --rc genhtml_legend=1 00:31:03.920 --rc geninfo_all_blocks=1 00:31:03.920 --rc geninfo_unexecuted_blocks=1 00:31:03.920 00:31:03.920 ' 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:31:03.920 07:28:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73848 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:03.920 07:28:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73848 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73848 ']' 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:03.920 07:28:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.920 [2024-11-20 07:28:28.049868] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:31:03.920 [2024-11-20 07:28:28.050011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73848 ] 00:31:04.179 [2024-11-20 07:28:28.235437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:04.438 [2024-11-20 07:28:28.382600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.438 [2024-11-20 07:28:28.382618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:31:05.460 07:28:29 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.460 [2024-11-20 07:28:29.318839] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:31:05.460 [2024-11-20 07:28:29.321605] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.460 07:28:29 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.460 malloc0 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.460 07:28:29 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:05.460 [2024-11-20 07:28:29.479009] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:31:05.460 [2024-11-20 07:28:29.479128] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:31:05.460 [2024-11-20 07:28:29.479143] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:31:05.460 [2024-11-20 07:28:29.479154] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:31:05.460 [2024-11-20 07:28:29.487940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:31:05.460 [2024-11-20 07:28:29.487968] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:31:05.460 [2024-11-20 07:28:29.494858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:31:05.460 [2024-11-20 07:28:29.495023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:31:05.460 [2024-11-20 07:28:29.509870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:31:05.460 1 00:31:05.460 07:28:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.460 07:28:29 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:31:06.397 07:28:30 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73883 00:31:06.397 07:28:30 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:31:06.397 07:28:30 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:31:06.656 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:06.656 fio-3.35 00:31:06.656 Starting 1 process 00:31:11.926 07:28:35 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73848 00:31:11.926 07:28:35 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:31:17.248 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73848 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:31:17.248 07:28:40 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73993 00:31:17.248 07:28:40 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:17.248 07:28:40 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73993 00:31:17.248 07:28:40 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73993 ']' 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.248 07:28:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.248 [2024-11-20 07:28:40.646659] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
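For reference, the crash/recover scenario that ublk_recovery.sh is replaying here condenses to the RPC sequence below. This is a minimal sketch assembled from the xtrace above and below; the rpc.py invocation path, PID bookkeeping, and the taskset CPU pinning of the real script are simplified.

  # expose a ublk device backed by an in-memory malloc bdev
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096     # 64 MB size, 4096-byte blocks
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128     # appears as /dev/ublkb1

  # drive I/O through the kernel block device, then crash the target mid-run
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
  kill -9 "$spdk_pid"

  # restart the target, recreate the backing bdev, recover ublk id 1
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1               # re-attaches /dev/ublkb1

The fio job stays running against /dev/ublkb1 across the crash; its 60-second run finishing with err= 0 further down is the pass criterion.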
00:31:17.248 [2024-11-20 07:28:40.647361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73993 ] 00:31:17.248 [2024-11-20 07:28:40.824433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.248 [2024-11-20 07:28:40.985664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.248 [2024-11-20 07:28:40.985699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:31:17.814 07:28:41 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.814 [2024-11-20 07:28:41.977860] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:31:17.814 [2024-11-20 07:28:41.981077] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.814 07:28:41 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.814 07:28:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.074 malloc0 00:31:18.074 07:28:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.074 07:28:42 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:31:18.074 07:28:42 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:18.074 07:28:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.074 [2024-11-20 07:28:42.151108] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:31:18.074 [2024-11-20 07:28:42.151187] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:31:18.074 [2024-11-20 07:28:42.151203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:31:18.074 [2024-11-20 07:28:42.158920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:31:18.074 [2024-11-20 07:28:42.158997] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:31:18.074 [2024-11-20 07:28:42.159012] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:31:18.074 [2024-11-20 07:28:42.159137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:31:18.074 1 00:31:18.074 07:28:42 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:18.074 07:28:42 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73883 00:31:18.074 [2024-11-20 07:28:42.166943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:31:18.074 [2024-11-20 07:28:42.174491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:31:18.074 [2024-11-20 07:28:42.181879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:31:18.074 [2024-11-20 
07:28:42.181932] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:32:14.302 00:32:14.302 fio_test: (groupid=0, jobs=1): err= 0: pid=73896: Wed Nov 20 07:29:30 2024 00:32:14.302 read: IOPS=18.3k, BW=71.5MiB/s (74.9MB/s)(4288MiB/60003msec) 00:32:14.302 slat (usec): min=2, max=300, avg= 7.22, stdev= 2.38 00:32:14.302 clat (usec): min=934, max=6667.4k, avg=3494.46, stdev=54703.06 00:32:14.302 lat (usec): min=939, max=6667.4k, avg=3501.68, stdev=54703.05 00:32:14.302 clat percentiles (usec): 00:32:14.302 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671], 00:32:14.302 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 3032], 00:32:14.302 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 4293], 00:32:14.302 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7570], 99.95th=[ 8717], 00:32:14.302 | 99.99th=[13566] 00:32:14.302 bw ( KiB/s): min= 6280, max=99120, per=100.00%, avg=81471.36, stdev=11450.03, samples=107 00:32:14.302 iops : min= 1570, max=24780, avg=20367.82, stdev=2862.50, samples=107 00:32:14.302 write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(4287MiB/60003msec); 0 zone resets 00:32:14.302 slat (usec): min=2, max=1483, avg= 7.41, stdev= 2.84 00:32:14.302 clat (usec): min=918, max=6667.6k, avg=3486.61, stdev=46741.53 00:32:14.302 lat (usec): min=924, max=6667.6k, avg=3494.02, stdev=46741.53 00:32:14.302 clat percentiles (usec): 00:32:14.303 | 1.00th=[ 2311], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2802], 00:32:14.303 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3195], 00:32:14.303 | 70.00th=[ 3294], 80.00th=[ 3359], 90.00th=[ 3523], 95.00th=[ 4228], 00:32:14.303 | 99.00th=[ 5735], 99.50th=[ 6325], 99.90th=[ 7701], 99.95th=[ 8717], 00:32:14.303 | 99.99th=[13435] 00:32:14.303 bw ( KiB/s): min= 5944, max=98288, per=100.00%, avg=81433.34, stdev=11397.12, samples=107 00:32:14.303 iops : min= 1486, max=24572, avg=20358.33, stdev=2849.28, samples=107 00:32:14.303 lat (usec) : 1000=0.01% 00:32:14.303 lat (msec) : 2=0.14%, 4=93.89%, 10=5.94%, 20=0.02%, >=2000=0.01% 00:32:14.303 cpu : usr=9.34%, sys=26.45%, ctx=73061, majf=0, minf=14 00:32:14.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:32:14.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:14.303 issued rwts: total=1097851,1097454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:14.303 00:32:14.303 Run status group 0 (all jobs): 00:32:14.303 READ: bw=71.5MiB/s (74.9MB/s), 71.5MiB/s-71.5MiB/s (74.9MB/s-74.9MB/s), io=4288MiB (4497MB), run=60003-60003msec 00:32:14.303 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=4287MiB (4495MB), run=60003-60003msec 00:32:14.303 00:32:14.303 Disk stats (read/write): 00:32:14.303 ublkb1: ios=1095771/1095327, merge=0/0, ticks=3728976/3587512, in_queue=7316489, util=99.93% 00:32:14.303 07:29:30 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 [2024-11-20 07:29:30.794999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:32:14.303 [2024-11-20 07:29:30.830209] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:32:14.303 
[2024-11-20 07:29:30.830465] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:32:14.303 [2024-11-20 07:29:30.837905] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:32:14.303 [2024-11-20 07:29:30.838092] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:32:14.303 [2024-11-20 07:29:30.838112] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.303 07:29:30 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 [2024-11-20 07:29:30.854056] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:14.303 [2024-11-20 07:29:30.863092] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:14.303 [2024-11-20 07:29:30.863176] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.303 07:29:30 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:14.303 07:29:30 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:32:14.303 07:29:30 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73993 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 73993 ']' 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 73993 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73993 00:32:14.303 killing process with pid 73993 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73993' 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@973 -- # kill 73993 00:32:14.303 07:29:30 ublk_recovery -- common/autotest_common.sh@978 -- # wait 73993 00:32:14.303 [2024-11-20 07:29:32.697369] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:32:14.303 [2024-11-20 07:29:32.697473] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:32:14.303 ************************************ 00:32:14.303 END TEST ublk_recovery 00:32:14.303 ************************************ 00:32:14.303 00:32:14.303 real 1m6.574s 00:32:14.303 user 1m48.734s 00:32:14.303 sys 0m35.140s 00:32:14.303 07:29:34 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.303 07:29:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 07:29:34 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:32:14.303 07:29:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:32:14.303 07:29:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.303 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 07:29:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:32:14.303 
07:29:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:32:14.303 07:29:34 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:32:14.303 07:29:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.303 07:29:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.303 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 ************************************ 00:32:14.303 START TEST ftl 00:32:14.303 ************************************ 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:32:14.303 * Looking for test storage... 00:32:14.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.303 07:29:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.303 07:29:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.303 07:29:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.303 07:29:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.303 07:29:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.303 07:29:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:32:14.303 07:29:34 ftl -- scripts/common.sh@345 -- # : 1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.303 07:29:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.303 07:29:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@353 -- # local d=1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.303 07:29:34 ftl -- scripts/common.sh@355 -- # echo 1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.303 07:29:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@353 -- # local d=2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.303 07:29:34 ftl -- scripts/common.sh@355 -- # echo 2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.303 07:29:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.303 07:29:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.303 07:29:34 ftl -- scripts/common.sh@368 -- # return 0 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.303 --rc genhtml_branch_coverage=1 00:32:14.303 --rc genhtml_function_coverage=1 00:32:14.303 --rc genhtml_legend=1 00:32:14.303 --rc geninfo_all_blocks=1 00:32:14.303 --rc geninfo_unexecuted_blocks=1 00:32:14.303 00:32:14.303 ' 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.303 --rc genhtml_branch_coverage=1 00:32:14.303 --rc genhtml_function_coverage=1 00:32:14.303 --rc genhtml_legend=1 00:32:14.303 --rc geninfo_all_blocks=1 00:32:14.303 --rc geninfo_unexecuted_blocks=1 00:32:14.303 00:32:14.303 ' 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.303 --rc genhtml_branch_coverage=1 00:32:14.303 --rc genhtml_function_coverage=1 00:32:14.303 --rc genhtml_legend=1 00:32:14.303 --rc geninfo_all_blocks=1 00:32:14.303 --rc geninfo_unexecuted_blocks=1 00:32:14.303 00:32:14.303 ' 00:32:14.303 07:29:34 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.303 --rc genhtml_branch_coverage=1 00:32:14.303 --rc genhtml_function_coverage=1 00:32:14.303 --rc genhtml_legend=1 00:32:14.303 --rc geninfo_all_blocks=1 00:32:14.303 --rc geninfo_unexecuted_blocks=1 00:32:14.303 00:32:14.303 ' 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:14.303 07:29:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:32:14.303 07:29:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:14.303 07:29:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:14.303 07:29:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
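The cmp_versions xtrace that opens each suite (here deciding whether the installed lcov predates version 2, to pick compatible coverage flags) is a plain field-by-field version compare. A minimal sketch reconstructed from that trace, with the per-field decimal validation of the real scripts/common.sh elided:

  cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v a b
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      # walk the longer of the two version arrays; missing fields count as 0
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          if ((a != b)); then
              ((a > b)) && [[ $op == '>' || $op == '>=' ]] && return 0
              ((a < b)) && [[ $op == '<' || $op == '<=' ]] && return 0
              return 1
          fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
  }
  cmp_versions 1.15 '<' 2 && echo 'lcov older than 2: keep --rc lcov_*_coverage options'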
00:32:14.303 07:29:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:14.303 07:29:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:14.303 07:29:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:14.303 07:29:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:14.303 07:29:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:14.303 07:29:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:14.303 07:29:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:14.303 07:29:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:14.303 07:29:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:14.303 07:29:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:14.303 07:29:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:14.303 07:29:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:14.303 07:29:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:14.303 07:29:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:14.303 07:29:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:14.303 07:29:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:14.303 07:29:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:14.303 07:29:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.303 07:29:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:32:14.303 07:29:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:14.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:14.303 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:14.303 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:14.303 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:14.303 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:14.303 07:29:35 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74799 00:32:14.303 07:29:35 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:32:14.303 07:29:35 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74799 00:32:14.303 07:29:35 ftl -- common/autotest_common.sh@835 -- # '[' -z 74799 ']' 00:32:14.303 07:29:35 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.303 07:29:35 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.303 07:29:35 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.303 07:29:35 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.303 07:29:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:14.303 [2024-11-20 07:29:35.428447] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:32:14.303 [2024-11-20 07:29:35.428866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74799 ] 00:32:14.303 [2024-11-20 07:29:35.633088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.303 [2024-11-20 07:29:35.802098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.303 07:29:36 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.303 07:29:36 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:14.303 07:29:36 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:32:14.303 07:29:36 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:32:14.303 07:29:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:32:14.303 07:29:37 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@50 -- # break 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:32:14.303 07:29:38 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:32:14.304 07:29:38 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:32:14.562 07:29:38 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:32:14.562 07:29:38 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:32:14.562 07:29:38 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:32:14.562 07:29:38 ftl -- ftl/ftl.sh@63 -- # break 00:32:14.562 07:29:38 ftl -- ftl/ftl.sh@66 -- # killprocess 74799 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 74799 ']' 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@958 -- # kill -0 74799 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@959 -- # uname 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.562 07:29:38 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74799 00:32:14.562 killing process with pid 74799 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74799' 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@973 -- # kill 74799 00:32:14.562 07:29:38 ftl -- common/autotest_common.sh@978 -- # wait 74799 00:32:17.114 07:29:41 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:32:17.114 07:29:41 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:32:17.115 07:29:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:17.115 07:29:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.115 07:29:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:17.115 ************************************ 00:32:17.115 START TEST ftl_fio_basic 00:32:17.115 ************************************ 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:32:17.115 * Looking for test storage... 00:32:17.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.115 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.374 --rc genhtml_branch_coverage=1 00:32:17.374 --rc genhtml_function_coverage=1 00:32:17.374 --rc genhtml_legend=1 00:32:17.374 --rc geninfo_all_blocks=1 00:32:17.374 --rc geninfo_unexecuted_blocks=1 00:32:17.374 00:32:17.374 ' 00:32:17.374 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.374 --rc genhtml_branch_coverage=1 00:32:17.374 --rc genhtml_function_coverage=1 00:32:17.375 --rc genhtml_legend=1 00:32:17.375 --rc geninfo_all_blocks=1 00:32:17.375 --rc geninfo_unexecuted_blocks=1 00:32:17.375 00:32:17.375 ' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:17.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.375 --rc genhtml_branch_coverage=1 00:32:17.375 --rc genhtml_function_coverage=1 00:32:17.375 --rc genhtml_legend=1 00:32:17.375 --rc geninfo_all_blocks=1 00:32:17.375 --rc geninfo_unexecuted_blocks=1 00:32:17.375 00:32:17.375 ' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:17.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.375 --rc genhtml_branch_coverage=1 00:32:17.375 --rc genhtml_function_coverage=1 00:32:17.375 --rc genhtml_legend=1 00:32:17.375 --rc geninfo_all_blocks=1 00:32:17.375 --rc geninfo_unexecuted_blocks=1 00:32:17.375 00:32:17.375 ' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
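fio.sh was handed its device pair on the command line (base 0000:00:11.0, cache 0000:00:10.0); those addresses were picked a few lines up in ftl.sh by filtering rpc.py bdev_get_bdevs output through jq. A standalone sketch of that probe follows; the jq filters are copied from the trace, while the --arg parameterization and head -n1 stand in for the hardcoded address and the for/break loop the script actually uses:

  # cache candidate: non-zoned, 64-byte metadata, >= 1310720 blocks
  nv_cache=$(rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
           | .driver_specific.nvme[].pci_address' | head -n1)

  # base candidate: any other non-zoned bdev of sufficient size
  device=$(rpc.py bdev_get_bdevs | jq -r --arg cache "$nv_cache" \
      '.[] | select(.driver_specific.nvme[0].pci_address != $cache
                    and .zoned == false and .num_blocks >= 1310720)
           | .driver_specific.nvme[].pci_address' | head -n1)

  echo "base=$device cache=$nv_cache"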
00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74948 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74948 00:32:17.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 74948 ']' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.375 07:29:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:32:17.375 [2024-11-20 07:29:41.489896] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
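(Editor's note: at this point fio.sh installs its cleanup trap and launches a dedicated spdk_tgt with core mask 7, binary 111, which is why the EAL banner below reports three available cores and three reactors come up on cores 0-2. waitforlisten then blocks until pid 74948 is serving RPCs on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming the default socket path; the polling loop is an illustration, not the verbatim waitforlisten helper:)

  "$spdk_tgt_bin" -m 7 &
  svcpid=$!
  for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break      # RPC server is up once the socket exists
    kill -0 "$svcpid" 2>/dev/null || exit 1   # give up if the target died during init
    sleep 0.5
  done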
00:32:17.375 [2024-11-20 07:29:41.490351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74948 ] 00:32:17.634 [2024-11-20 07:29:41.682104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.634 [2024-11-20 07:29:41.805847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.634 [2024-11-20 07:29:41.805923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.634 [2024-11-20 07:29:41.805941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.571 07:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.571 07:29:42 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:32:18.571 07:29:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:18.571 07:29:42 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:32:18.571 07:29:42 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:18.572 07:29:42 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:32:18.572 07:29:42 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:32:18.572 07:29:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:19.140 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:19.140 { 00:32:19.140 "name": "nvme0n1", 00:32:19.140 "aliases": [ 00:32:19.140 "f38e58d2-4103-4f80-b09b-7c59901379c2" 00:32:19.140 ], 00:32:19.140 "product_name": "NVMe disk", 00:32:19.140 "block_size": 4096, 00:32:19.140 "num_blocks": 1310720, 00:32:19.140 "uuid": "f38e58d2-4103-4f80-b09b-7c59901379c2", 00:32:19.140 "numa_id": -1, 00:32:19.140 "assigned_rate_limits": { 00:32:19.140 "rw_ios_per_sec": 0, 00:32:19.140 "rw_mbytes_per_sec": 0, 00:32:19.140 "r_mbytes_per_sec": 0, 00:32:19.140 "w_mbytes_per_sec": 0 00:32:19.140 }, 00:32:19.140 "claimed": false, 00:32:19.140 "zoned": false, 00:32:19.140 "supported_io_types": { 00:32:19.140 "read": true, 00:32:19.140 "write": true, 00:32:19.140 "unmap": true, 00:32:19.140 "flush": true, 00:32:19.140 "reset": true, 00:32:19.140 "nvme_admin": true, 00:32:19.140 "nvme_io": true, 00:32:19.140 "nvme_io_md": false, 00:32:19.140 "write_zeroes": true, 00:32:19.140 "zcopy": false, 00:32:19.140 "get_zone_info": false, 00:32:19.140 "zone_management": false, 00:32:19.140 "zone_append": false, 00:32:19.140 "compare": true, 00:32:19.140 "compare_and_write": false, 00:32:19.140 "abort": true, 00:32:19.140 
"seek_hole": false, 00:32:19.140 "seek_data": false, 00:32:19.140 "copy": true, 00:32:19.140 "nvme_iov_md": false 00:32:19.140 }, 00:32:19.140 "driver_specific": { 00:32:19.140 "nvme": [ 00:32:19.140 { 00:32:19.140 "pci_address": "0000:00:11.0", 00:32:19.140 "trid": { 00:32:19.140 "trtype": "PCIe", 00:32:19.141 "traddr": "0000:00:11.0" 00:32:19.141 }, 00:32:19.141 "ctrlr_data": { 00:32:19.141 "cntlid": 0, 00:32:19.141 "vendor_id": "0x1b36", 00:32:19.141 "model_number": "QEMU NVMe Ctrl", 00:32:19.141 "serial_number": "12341", 00:32:19.141 "firmware_revision": "8.0.0", 00:32:19.141 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:19.141 "oacs": { 00:32:19.141 "security": 0, 00:32:19.141 "format": 1, 00:32:19.141 "firmware": 0, 00:32:19.141 "ns_manage": 1 00:32:19.141 }, 00:32:19.141 "multi_ctrlr": false, 00:32:19.141 "ana_reporting": false 00:32:19.141 }, 00:32:19.141 "vs": { 00:32:19.141 "nvme_version": "1.4" 00:32:19.141 }, 00:32:19.141 "ns_data": { 00:32:19.141 "id": 1, 00:32:19.141 "can_share": false 00:32:19.141 } 00:32:19.141 } 00:32:19.141 ], 00:32:19.141 "mp_policy": "active_passive" 00:32:19.141 } 00:32:19.141 } 00:32:19.141 ]' 00:32:19.141 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.400 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:19.659 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:32:19.659 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:19.918 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=42e51f2f-6c22-4b6c-b6c5-cac259e5603e 00:32:19.918 07:29:43 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 42e51f2f-6c22-4b6c-b6c5-cac259e5603e 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=71c9198d-c4ed-4311-a681-8233b9053f53 
00:32:20.177 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:32:20.177 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.436 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:20.436 { 00:32:20.436 "name": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:20.436 "aliases": [ 00:32:20.436 "lvs/nvme0n1p0" 00:32:20.436 ], 00:32:20.436 "product_name": "Logical Volume", 00:32:20.436 "block_size": 4096, 00:32:20.436 "num_blocks": 26476544, 00:32:20.436 "uuid": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:20.436 "assigned_rate_limits": { 00:32:20.436 "rw_ios_per_sec": 0, 00:32:20.436 "rw_mbytes_per_sec": 0, 00:32:20.436 "r_mbytes_per_sec": 0, 00:32:20.436 "w_mbytes_per_sec": 0 00:32:20.436 }, 00:32:20.436 "claimed": false, 00:32:20.436 "zoned": false, 00:32:20.436 "supported_io_types": { 00:32:20.436 "read": true, 00:32:20.436 "write": true, 00:32:20.436 "unmap": true, 00:32:20.436 "flush": false, 00:32:20.436 "reset": true, 00:32:20.436 "nvme_admin": false, 00:32:20.436 "nvme_io": false, 00:32:20.437 "nvme_io_md": false, 00:32:20.437 "write_zeroes": true, 00:32:20.437 "zcopy": false, 00:32:20.437 "get_zone_info": false, 00:32:20.437 "zone_management": false, 00:32:20.437 "zone_append": false, 00:32:20.437 "compare": false, 00:32:20.437 "compare_and_write": false, 00:32:20.437 "abort": false, 00:32:20.437 "seek_hole": true, 00:32:20.437 "seek_data": true, 00:32:20.437 "copy": false, 00:32:20.437 "nvme_iov_md": false 00:32:20.437 }, 00:32:20.437 "driver_specific": { 00:32:20.437 "lvol": { 00:32:20.437 "lvol_store_uuid": "42e51f2f-6c22-4b6c-b6c5-cac259e5603e", 00:32:20.437 "base_bdev": "nvme0n1", 00:32:20.437 "thin_provision": true, 00:32:20.437 "num_allocated_clusters": 0, 00:32:20.437 "snapshot": false, 00:32:20.437 "clone": false, 00:32:20.437 "esnap_clone": false 00:32:20.437 } 00:32:20.437 } 00:32:20.437 } 00:32:20.437 ]' 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:32:20.437 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=71c9198d-c4ed-4311-a681-8233b9053f53 00:32:20.696 07:29:44 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:32:20.696 07:29:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:21.263 { 00:32:21.263 "name": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:21.263 "aliases": [ 00:32:21.263 "lvs/nvme0n1p0" 00:32:21.263 ], 00:32:21.263 "product_name": "Logical Volume", 00:32:21.263 "block_size": 4096, 00:32:21.263 "num_blocks": 26476544, 00:32:21.263 "uuid": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:21.263 "assigned_rate_limits": { 00:32:21.263 "rw_ios_per_sec": 0, 00:32:21.263 "rw_mbytes_per_sec": 0, 00:32:21.263 "r_mbytes_per_sec": 0, 00:32:21.263 "w_mbytes_per_sec": 0 00:32:21.263 }, 00:32:21.263 "claimed": false, 00:32:21.263 "zoned": false, 00:32:21.263 "supported_io_types": { 00:32:21.263 "read": true, 00:32:21.263 "write": true, 00:32:21.263 "unmap": true, 00:32:21.263 "flush": false, 00:32:21.263 "reset": true, 00:32:21.263 "nvme_admin": false, 00:32:21.263 "nvme_io": false, 00:32:21.263 "nvme_io_md": false, 00:32:21.263 "write_zeroes": true, 00:32:21.263 "zcopy": false, 00:32:21.263 "get_zone_info": false, 00:32:21.263 "zone_management": false, 00:32:21.263 "zone_append": false, 00:32:21.263 "compare": false, 00:32:21.263 "compare_and_write": false, 00:32:21.263 "abort": false, 00:32:21.263 "seek_hole": true, 00:32:21.263 "seek_data": true, 00:32:21.263 "copy": false, 00:32:21.263 "nvme_iov_md": false 00:32:21.263 }, 00:32:21.263 "driver_specific": { 00:32:21.263 "lvol": { 00:32:21.263 "lvol_store_uuid": "42e51f2f-6c22-4b6c-b6c5-cac259e5603e", 00:32:21.263 "base_bdev": "nvme0n1", 00:32:21.263 "thin_provision": true, 00:32:21.263 "num_allocated_clusters": 0, 00:32:21.263 "snapshot": false, 00:32:21.263 "clone": false, 00:32:21.263 "esnap_clone": false 00:32:21.263 } 00:32:21.263 } 00:32:21.263 } 00:32:21.263 ]' 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:32:21.263 07:29:45 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:32:21.522 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=71c9198d-c4ed-4311-a681-8233b9053f53 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:32:21.522 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71c9198d-c4ed-4311-a681-8233b9053f53 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:21.780 { 00:32:21.780 "name": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:21.780 "aliases": [ 00:32:21.780 "lvs/nvme0n1p0" 00:32:21.780 ], 00:32:21.780 "product_name": "Logical Volume", 00:32:21.780 "block_size": 4096, 00:32:21.780 "num_blocks": 26476544, 00:32:21.780 "uuid": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:21.780 "assigned_rate_limits": { 00:32:21.780 "rw_ios_per_sec": 0, 00:32:21.780 "rw_mbytes_per_sec": 0, 00:32:21.780 "r_mbytes_per_sec": 0, 00:32:21.780 "w_mbytes_per_sec": 0 00:32:21.780 }, 00:32:21.780 "claimed": false, 00:32:21.780 "zoned": false, 00:32:21.780 "supported_io_types": { 00:32:21.780 "read": true, 00:32:21.780 "write": true, 00:32:21.780 "unmap": true, 00:32:21.780 "flush": false, 00:32:21.780 "reset": true, 00:32:21.780 "nvme_admin": false, 00:32:21.780 "nvme_io": false, 00:32:21.780 "nvme_io_md": false, 00:32:21.780 "write_zeroes": true, 00:32:21.780 "zcopy": false, 00:32:21.780 "get_zone_info": false, 00:32:21.780 "zone_management": false, 00:32:21.780 "zone_append": false, 00:32:21.780 "compare": false, 00:32:21.780 "compare_and_write": false, 00:32:21.780 "abort": false, 00:32:21.780 "seek_hole": true, 00:32:21.780 "seek_data": true, 00:32:21.780 "copy": false, 00:32:21.780 "nvme_iov_md": false 00:32:21.780 }, 00:32:21.780 "driver_specific": { 00:32:21.780 "lvol": { 00:32:21.780 "lvol_store_uuid": "42e51f2f-6c22-4b6c-b6c5-cac259e5603e", 00:32:21.780 "base_bdev": "nvme0n1", 00:32:21.780 "thin_provision": true, 00:32:21.780 "num_allocated_clusters": 0, 00:32:21.780 "snapshot": false, 00:32:21.780 "clone": false, 00:32:21.780 "esnap_clone": false 00:32:21.780 } 00:32:21.780 } 00:32:21.780 } 00:32:21.780 ]' 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:32:21.780 07:29:45 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 71c9198d-c4ed-4311-a681-8233b9053f53 -c nvc0n1p0 --l2p_dram_limit 60 00:32:22.046 [2024-11-20 07:29:46.152235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.152296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:22.046 [2024-11-20 07:29:46.152317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:22.046 
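(Editor's note: the "[: -eq: unary operator expected" message a few records back is a latent quoting bug in fio.sh rather than a test failure: the left-hand operand of -eq expanded to an empty string, so test saw '[ -eq 1 ]', errored out, and the condition fell through as false, letting the script continue. The usual hardening is to default and quote the variable; the variable name below is hypothetical, since the log elides which one was unset:)

  # before: [ $some_flag -eq 1 ]   breaks when some_flag is unset or empty
  if [[ "${some_flag:-0}" -eq 1 ]]; then
    :   # branch body would run here
  fi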
[2024-11-20 07:29:46.152329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.152409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.152425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:22.046 [2024-11-20 07:29:46.152439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:22.046 [2024-11-20 07:29:46.152450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.152500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:22.046 [2024-11-20 07:29:46.153595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:22.046 [2024-11-20 07:29:46.153632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.153644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:22.046 [2024-11-20 07:29:46.153658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:32:22.046 [2024-11-20 07:29:46.153669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.153842] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c1a24bc-42aa-4a33-9851-d245b58dcbed 00:32:22.046 [2024-11-20 07:29:46.155386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.155428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:22.046 [2024-11-20 07:29:46.155447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:22.046 [2024-11-20 07:29:46.155462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.163062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.163107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:22.046 [2024-11-20 07:29:46.163124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.530 ms 00:32:22.046 [2024-11-20 07:29:46.163137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.163266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.163283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:22.046 [2024-11-20 07:29:46.163295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:32:22.046 [2024-11-20 07:29:46.163313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.163405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.163421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:22.046 [2024-11-20 07:29:46.163432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:22.046 [2024-11-20 07:29:46.163445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.163480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:22.046 [2024-11-20 07:29:46.168443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 
07:29:46.168478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:22.046 [2024-11-20 07:29:46.168495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:32:22.046 [2024-11-20 07:29:46.168509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.168554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.168566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:22.046 [2024-11-20 07:29:46.168584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:22.046 [2024-11-20 07:29:46.168594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.168648] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:22.046 [2024-11-20 07:29:46.168845] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:22.046 [2024-11-20 07:29:46.168917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:22.046 [2024-11-20 07:29:46.168932] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:22.046 [2024-11-20 07:29:46.168950] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:22.046 [2024-11-20 07:29:46.168963] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:22.046 [2024-11-20 07:29:46.168978] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:22.046 [2024-11-20 07:29:46.168988] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:22.046 [2024-11-20 07:29:46.169000] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:22.046 [2024-11-20 07:29:46.169011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:22.046 [2024-11-20 07:29:46.169024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.169037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:22.046 [2024-11-20 07:29:46.169052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:32:22.046 [2024-11-20 07:29:46.169063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.169159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.046 [2024-11-20 07:29:46.169171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:22.046 [2024-11-20 07:29:46.169183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:32:22.046 [2024-11-20 07:29:46.169193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.046 [2024-11-20 07:29:46.169315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:22.046 [2024-11-20 07:29:46.169330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:22.046 [2024-11-20 07:29:46.169346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:22.046 [2024-11-20 07:29:46.169357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.046 [2024-11-20 07:29:46.169370] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:32:22.046 [2024-11-20 07:29:46.169379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:22.046 [2024-11-20 07:29:46.169391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:22.046 [2024-11-20 07:29:46.169401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:22.046 [2024-11-20 07:29:46.169413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:22.046 [2024-11-20 07:29:46.169422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:22.046 [2024-11-20 07:29:46.169434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:22.046 [2024-11-20 07:29:46.169444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:22.046 [2024-11-20 07:29:46.169456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:22.046 [2024-11-20 07:29:46.169465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:22.046 [2024-11-20 07:29:46.169477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:22.046 [2024-11-20 07:29:46.169486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.046 [2024-11-20 07:29:46.169502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:22.046 [2024-11-20 07:29:46.169511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:22.047 [2024-11-20 07:29:46.169556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:22.047 [2024-11-20 07:29:46.169587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:22.047 [2024-11-20 07:29:46.169620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:22.047 [2024-11-20 07:29:46.169650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:22.047 [2024-11-20 07:29:46.169685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:22.047 [2024-11-20 07:29:46.169707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:22.047 [2024-11-20 07:29:46.169731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:22.047 [2024-11-20 07:29:46.169743] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:22.047 [2024-11-20 07:29:46.169752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:22.047 [2024-11-20 07:29:46.169764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:22.047 [2024-11-20 07:29:46.169774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:22.047 [2024-11-20 07:29:46.169795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:22.047 [2024-11-20 07:29:46.169809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169830] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:22.047 [2024-11-20 07:29:46.169843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:22.047 [2024-11-20 07:29:46.169853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:22.047 [2024-11-20 07:29:46.169876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:22.047 [2024-11-20 07:29:46.169891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:22.047 [2024-11-20 07:29:46.169900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:22.047 [2024-11-20 07:29:46.169913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:22.047 [2024-11-20 07:29:46.169924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:22.047 [2024-11-20 07:29:46.169936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:22.047 [2024-11-20 07:29:46.169952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:22.047 [2024-11-20 07:29:46.169967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.169980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:22.047 [2024-11-20 07:29:46.169993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:22.047 [2024-11-20 07:29:46.170004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:22.047 [2024-11-20 07:29:46.170017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:22.047 [2024-11-20 07:29:46.170027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:22.047 [2024-11-20 07:29:46.170040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:22.047 [2024-11-20 07:29:46.170050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:22.047 [2024-11-20 07:29:46.170063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:32:22.047 [2024-11-20 07:29:46.170073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:22.047 [2024-11-20 07:29:46.170089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:22.047 [2024-11-20 07:29:46.170147] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:22.047 [2024-11-20 07:29:46.170161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:22.047 [2024-11-20 07:29:46.170188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:22.047 [2024-11-20 07:29:46.170198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:22.047 [2024-11-20 07:29:46.170211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:22.047 [2024-11-20 07:29:46.170222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:22.047 [2024-11-20 07:29:46.170235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:22.047 [2024-11-20 07:29:46.170246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:32:22.047 [2024-11-20 07:29:46.170258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:22.047 [2024-11-20 07:29:46.170330] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
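(Editor's note: this is the centerpiece of the setup. bdev_ftl_create, issued at fio.sh@60 with a 240 s RPC timeout, binds the thin lvol as the base device and the nvc0n1p0 split as the NV write-buffer cache, writes a fresh superblock, and lays out both devices before scrubbing the cache. The dump is self-consistent: 20971520 L2P entries x 4 B per address = 80 MiB, matching the "Region l2p ... blocks: 80.00 MiB" line, and 20971520 x 4096 B blocks = 80 GiB of exposed capacity carved from the 103424 MiB base; --l2p_dram_limit 60 only caps how much of that L2P table may stay resident in DRAM. The call as issued:)

  # base = the thin lvol, cache = the 5171 MiB split of the 0000:00:10.0 drive
  rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 71c9198d-c4ed-4311-a681-8233b9053f53 -c nvc0n1p0 --l2p_dram_limit 60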
00:32:22.047 [2024-11-20 07:29:46.170347] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:25.339 [2024-11-20 07:29:49.338504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.338576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:25.339 [2024-11-20 07:29:49.338598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3168.142 ms 00:32:25.339 [2024-11-20 07:29:49.338623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.380515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.380583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:25.339 [2024-11-20 07:29:49.380601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.543 ms 00:32:25.339 [2024-11-20 07:29:49.380617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.380799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.380835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:25.339 [2024-11-20 07:29:49.380850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:25.339 [2024-11-20 07:29:49.380885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.440486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.440558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:25.339 [2024-11-20 07:29:49.440579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.538 ms 00:32:25.339 [2024-11-20 07:29:49.440596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.440658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.440674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:25.339 [2024-11-20 07:29:49.440691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:25.339 [2024-11-20 07:29:49.440704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.441250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.441271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:25.339 [2024-11-20 07:29:49.441284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:32:25.339 [2024-11-20 07:29:49.441301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.441453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.441477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:25.339 [2024-11-20 07:29:49.441489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:32:25.339 [2024-11-20 07:29:49.441506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.463570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.463824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:25.339 [2024-11-20 
07:29:49.463850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.030 ms 00:32:25.339 [2024-11-20 07:29:49.463864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.339 [2024-11-20 07:29:49.476985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:25.339 [2024-11-20 07:29:49.493637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.339 [2024-11-20 07:29:49.493721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:25.339 [2024-11-20 07:29:49.493742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.618 ms 00:32:25.339 [2024-11-20 07:29:49.493756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.556562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.556808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:25.599 [2024-11-20 07:29:49.556854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.742 ms 00:32:25.599 [2024-11-20 07:29:49.556866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.557096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.557110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:25.599 [2024-11-20 07:29:49.557128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:32:25.599 [2024-11-20 07:29:49.557139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.595624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.595678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:25.599 [2024-11-20 07:29:49.595698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.408 ms 00:32:25.599 [2024-11-20 07:29:49.595710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.633907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.633949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:25.599 [2024-11-20 07:29:49.633968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.136 ms 00:32:25.599 [2024-11-20 07:29:49.633978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.634731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.634768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:25.599 [2024-11-20 07:29:49.634791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:32:25.599 [2024-11-20 07:29:49.634801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 07:29:49.736978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.737051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:25.599 [2024-11-20 07:29:49.737075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.082 ms 00:32:25.599 [2024-11-20 07:29:49.737091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.599 [2024-11-20 
07:29:49.780247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.599 [2024-11-20 07:29:49.780311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:25.599 [2024-11-20 07:29:49.780331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.030 ms 00:32:25.599 [2024-11-20 07:29:49.780342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.858 [2024-11-20 07:29:49.820186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.858 [2024-11-20 07:29:49.820417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:25.858 [2024-11-20 07:29:49.820449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.776 ms 00:32:25.858 [2024-11-20 07:29:49.820461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.858 [2024-11-20 07:29:49.860224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.858 [2024-11-20 07:29:49.860284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:25.858 [2024-11-20 07:29:49.860306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.689 ms 00:32:25.858 [2024-11-20 07:29:49.860316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.858 [2024-11-20 07:29:49.860379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.858 [2024-11-20 07:29:49.860391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:25.858 [2024-11-20 07:29:49.860410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:25.858 [2024-11-20 07:29:49.860424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.858 [2024-11-20 07:29:49.860650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.858 [2024-11-20 07:29:49.860667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:25.858 [2024-11-20 07:29:49.860681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:25.858 [2024-11-20 07:29:49.860692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.858 [2024-11-20 07:29:49.862135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3709.320 ms, result 0 00:32:25.858 { 00:32:25.858 "name": "ftl0", 00:32:25.858 "uuid": "4c1a24bc-42aa-4a33-9851-d245b58dcbed" 00:32:25.858 } 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:25.858 07:29:49 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:26.118 07:29:50 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:32:26.378 [ 00:32:26.378 { 00:32:26.378 "name": "ftl0", 00:32:26.378 "aliases": [ 00:32:26.378 "4c1a24bc-42aa-4a33-9851-d245b58dcbed" 00:32:26.378 ], 00:32:26.378 "product_name": "FTL 
disk", 00:32:26.378 "block_size": 4096, 00:32:26.378 "num_blocks": 20971520, 00:32:26.378 "uuid": "4c1a24bc-42aa-4a33-9851-d245b58dcbed", 00:32:26.378 "assigned_rate_limits": { 00:32:26.378 "rw_ios_per_sec": 0, 00:32:26.378 "rw_mbytes_per_sec": 0, 00:32:26.378 "r_mbytes_per_sec": 0, 00:32:26.378 "w_mbytes_per_sec": 0 00:32:26.378 }, 00:32:26.378 "claimed": false, 00:32:26.378 "zoned": false, 00:32:26.378 "supported_io_types": { 00:32:26.378 "read": true, 00:32:26.378 "write": true, 00:32:26.378 "unmap": true, 00:32:26.378 "flush": true, 00:32:26.378 "reset": false, 00:32:26.378 "nvme_admin": false, 00:32:26.378 "nvme_io": false, 00:32:26.378 "nvme_io_md": false, 00:32:26.378 "write_zeroes": true, 00:32:26.378 "zcopy": false, 00:32:26.378 "get_zone_info": false, 00:32:26.378 "zone_management": false, 00:32:26.378 "zone_append": false, 00:32:26.378 "compare": false, 00:32:26.378 "compare_and_write": false, 00:32:26.378 "abort": false, 00:32:26.378 "seek_hole": false, 00:32:26.378 "seek_data": false, 00:32:26.378 "copy": false, 00:32:26.378 "nvme_iov_md": false 00:32:26.378 }, 00:32:26.378 "driver_specific": { 00:32:26.378 "ftl": { 00:32:26.378 "base_bdev": "71c9198d-c4ed-4311-a681-8233b9053f53", 00:32:26.378 "cache": "nvc0n1p0" 00:32:26.378 } 00:32:26.378 } 00:32:26.378 } 00:32:26.378 ] 00:32:26.378 07:29:50 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:32:26.378 07:29:50 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:32:26.378 07:29:50 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:26.637 07:29:50 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:32:26.637 07:29:50 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:26.637 [2024-11-20 07:29:50.790619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.637 [2024-11-20 07:29:50.790674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:26.637 [2024-11-20 07:29:50.790690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:26.637 [2024-11-20 07:29:50.790704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.637 [2024-11-20 07:29:50.790741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:26.637 [2024-11-20 07:29:50.795139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.637 [2024-11-20 07:29:50.795176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:26.637 [2024-11-20 07:29:50.795193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.373 ms 00:32:26.637 [2024-11-20 07:29:50.795204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.637 [2024-11-20 07:29:50.795689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.637 [2024-11-20 07:29:50.795707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:26.637 [2024-11-20 07:29:50.795721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:32:26.637 [2024-11-20 07:29:50.795732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.637 [2024-11-20 07:29:50.798322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.637 [2024-11-20 07:29:50.798480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:26.637 
[2024-11-20 07:29:50.798507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:32:26.637 [2024-11-20 07:29:50.798518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.637 [2024-11-20 07:29:50.803707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.637 [2024-11-20 07:29:50.803738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:26.637 [2024-11-20 07:29:50.803754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:32:26.637 [2024-11-20 07:29:50.803764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.897 [2024-11-20 07:29:50.844108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.897 [2024-11-20 07:29:50.844160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:26.897 [2024-11-20 07:29:50.844181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.253 ms 00:32:26.897 [2024-11-20 07:29:50.844193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.897 [2024-11-20 07:29:50.867806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.897 [2024-11-20 07:29:50.867866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:26.897 [2024-11-20 07:29:50.867886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:32:26.898 [2024-11-20 07:29:50.867901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:50.868150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.898 [2024-11-20 07:29:50.868166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:26.898 [2024-11-20 07:29:50.868180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:32:26.898 [2024-11-20 07:29:50.868190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:50.907085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.898 [2024-11-20 07:29:50.907150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:26.898 [2024-11-20 07:29:50.907169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.857 ms 00:32:26.898 [2024-11-20 07:29:50.907180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:50.945132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.898 [2024-11-20 07:29:50.945193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:26.898 [2024-11-20 07:29:50.945214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.883 ms 00:32:26.898 [2024-11-20 07:29:50.945225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:50.984335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.898 [2024-11-20 07:29:50.984394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:26.898 [2024-11-20 07:29:50.984414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.035 ms 00:32:26.898 [2024-11-20 07:29:50.984426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:51.022891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.898 [2024-11-20 07:29:51.022942] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:26.898 [2024-11-20 07:29:51.022961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.303 ms 00:32:26.898 [2024-11-20 07:29:51.022972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.898 [2024-11-20 07:29:51.023029] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:26.898 [2024-11-20 07:29:51.023048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 
[2024-11-20 07:29:51.023328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:26.898 [2024-11-20 07:29:51.023584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:32:26.899 [2024-11-20 07:29:51.023646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.023993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:26.899 [2024-11-20 07:29:51.024367] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:26.899 [2024-11-20 07:29:51.024380] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c1a24bc-42aa-4a33-9851-d245b58dcbed 00:32:26.899 [2024-11-20 07:29:51.024391] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:26.899 [2024-11-20 07:29:51.024406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:26.899 [2024-11-20 07:29:51.024417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:26.899 [2024-11-20 07:29:51.024434] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:26.899 [2024-11-20 07:29:51.024444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:26.899 [2024-11-20 07:29:51.024457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:26.899 [2024-11-20 07:29:51.024473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:26.899 [2024-11-20 07:29:51.024485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:26.899 [2024-11-20 07:29:51.024494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:26.899 [2024-11-20 07:29:51.024507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.900 [2024-11-20 07:29:51.024518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:26.900 [2024-11-20 07:29:51.024532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:32:26.900 [2024-11-20 07:29:51.024542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.900 [2024-11-20 07:29:51.046032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.900 [2024-11-20 07:29:51.046081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:26.900 [2024-11-20 07:29:51.046098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.418 ms 00:32:26.900 [2024-11-20 07:29:51.046109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.900 [2024-11-20 07:29:51.046675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.900 [2024-11-20 07:29:51.046695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:26.900 [2024-11-20 07:29:51.046711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:32:26.900 [2024-11-20 07:29:51.046721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.159 [2024-11-20 07:29:51.118137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.159 [2024-11-20 07:29:51.118376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:27.159 [2024-11-20 07:29:51.118406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.159 [2024-11-20 07:29:51.118417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
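The records above are the forward half of the bdev_ftl_unload -b ftl0 RPC issued at fio.sh@73: metadata is persisted, the clean-shutdown flag is set, and statistics are dumped (the "WAF: inf" line is write amplification = total writes / user writes = 960 / 0, infinite because no user data was written through this instance). The Rollback records that continue below undo each init step in turn. A sketch of the unload-and-verify pattern, reusing only RPCs that appear in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_ftl_unload -b ftl0             # returns once the 'FTL shutdown' sequence completes
    "$RPC" bdev_get_bdevs -b ftl0 2>/dev/null \
        || echo "ftl0 unloaded"                # the lookup fails once the bdev is gone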
00:32:27.159 [2024-11-20 07:29:51.118499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.159 [2024-11-20 07:29:51.118510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:27.159 [2024-11-20 07:29:51.118524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.159 [2024-11-20 07:29:51.118535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.159 [2024-11-20 07:29:51.118676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.159 [2024-11-20 07:29:51.118690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:27.159 [2024-11-20 07:29:51.118708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.159 [2024-11-20 07:29:51.118718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.159 [2024-11-20 07:29:51.118755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.159 [2024-11-20 07:29:51.118766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:27.159 [2024-11-20 07:29:51.118779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.160 [2024-11-20 07:29:51.118790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.160 [2024-11-20 07:29:51.258242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.160 [2024-11-20 07:29:51.258437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:27.160 [2024-11-20 07:29:51.258465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.160 [2024-11-20 07:29:51.258477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.366447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.366629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:27.419 [2024-11-20 07:29:51.366657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.366669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.366799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.366828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:27.419 [2024-11-20 07:29:51.366843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.366858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.366938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.366950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:27.419 [2024-11-20 07:29:51.366964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.366974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.367122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.367136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:27.419 [2024-11-20 07:29:51.367150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 
07:29:51.367160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.367225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.367238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:27.419 [2024-11-20 07:29:51.367251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.367262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.367314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.367325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:27.419 [2024-11-20 07:29:51.367338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.367348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.367410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:27.419 [2024-11-20 07:29:51.367423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:27.419 [2024-11-20 07:29:51.367436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:27.419 [2024-11-20 07:29:51.367445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.419 [2024-11-20 07:29:51.367609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 576.956 ms, result 0 00:32:27.419 true 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74948 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 74948 ']' 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 74948 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74948 00:32:27.419 killing process with pid 74948 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74948' 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 74948 00:32:27.419 07:29:51 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 74948 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:32.695 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:32:32.696 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:32.696 07:29:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:32:32.696 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:32:32.696 fio-3.35 00:32:32.696 Starting 1 thread 00:32:37.966 00:32:37.966 test: (groupid=0, jobs=1): err= 0: pid=75172: Wed Nov 20 07:30:01 2024 00:32:37.966 read: IOPS=1019, BW=67.7MiB/s (71.0MB/s)(255MiB/3760msec) 00:32:37.966 slat (nsec): min=4276, max=30340, avg=6346.42, stdev=2656.78 00:32:37.966 clat (usec): min=275, max=973, avg=435.04, stdev=59.08 00:32:37.966 lat (usec): min=281, max=980, avg=441.38, stdev=59.58 00:32:37.966 clat percentiles (usec): 00:32:37.966 | 1.00th=[ 330], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 383], 00:32:37.966 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 441], 00:32:37.966 | 70.00th=[ 465], 80.00th=[ 490], 90.00th=[ 510], 95.00th=[ 529], 00:32:37.966 | 99.00th=[ 578], 99.50th=[ 611], 99.90th=[ 660], 99.95th=[ 758], 00:32:37.966 | 99.99th=[ 971] 00:32:37.966 write: IOPS=1026, BW=68.2MiB/s (71.5MB/s)(256MiB/3756msec); 0 zone resets 00:32:37.966 slat (nsec): min=16171, max=69242, avg=20903.22, stdev=4623.08 00:32:37.966 clat (usec): min=336, max=1210, avg=503.65, stdev=67.75 00:32:37.966 lat (usec): min=360, max=1244, avg=524.55, stdev=68.04 00:32:37.966 clat percentiles (usec): 00:32:37.966 | 1.00th=[ 367], 5.00th=[ 404], 10.00th=[ 429], 20.00th=[ 449], 00:32:37.966 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 502], 60.00th=[ 519], 00:32:37.966 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 611], 00:32:37.966 | 99.00th=[ 709], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 857], 00:32:37.966 | 99.99th=[ 1205] 00:32:37.966 bw ( KiB/s): min=66096, max=74392, per=99.38%, avg=69379.43, stdev=2985.22, samples=7 00:32:37.966 iops : min= 972, max= 1094, avg=1020.29, stdev=43.90, samples=7 00:32:37.966 lat (usec) : 500=66.68%, 750=33.01%, 1000=0.30% 00:32:37.966 
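The fio banner above ("rw=randwrite, bs=68.0KiB, ioengine=spdk_bdev, iodepth=1") pins down the workload, and the numbers are self-consistent: 1019 IOPS x 68 KiB per IO is 67.7 MiB/s, exactly the reported read bandwidth. For orientation, a hypothetical reconstruction of what randw-verify.fio plausibly contains, based only on that banner and the test name; the shipped file is test/ftl/config/fio/randw-verify.fio and may differ:

    cat > /tmp/randw-verify.fio <<'EOF'
    [global]
    ioengine=spdk_bdev              ; from the fio banner above
    spdk_json_conf=/tmp/ftl.json    ; assumption: the config captured via save_subsystem_config
    thread=1
    rw=randwrite
    bs=68k
    iodepth=1
    verify=crc32c                   ; assumption: some verify mode is implied by 'randw-verify'

    [test]
    filename=ftl0                   ; spdk_bdev jobs address bdevs by name
    EOF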
lat (msec) : 2=0.01% 00:32:37.966 cpu : usr=99.20%, sys=0.11%, ctx=8, majf=0, minf=1169 00:32:37.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.966 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:37.966 00:32:37.966 Run status group 0 (all jobs): 00:32:37.966 READ: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=255MiB (267MB), run=3760-3760msec 00:32:37.966 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=256MiB (269MB), run=3756-3756msec 00:32:39.873 ----------------------------------------------------- 00:32:39.873 Suppressions used: 00:32:39.873 count bytes template 00:32:39.873 1 5 /usr/src/fio/parse.c 00:32:39.873 1 8 libtcmalloc_minimal.so 00:32:39.873 1 904 libcrypto.so 00:32:39.873 ----------------------------------------------------- 00:32:39.873 00:32:39.873 07:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:32:39.873 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.873 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:39.874 07:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:32:39.874 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:32:39.874 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:32:39.874 fio-3.35 00:32:39.874 Starting 2 threads 00:33:12.002 00:33:12.002 first_half: (groupid=0, jobs=1): err= 0: pid=75275: Wed Nov 20 07:30:33 2024 00:33:12.002 read: IOPS=2382, BW=9528KiB/s (9757kB/s)(255MiB/27417msec) 00:33:12.002 slat (nsec): min=3798, max=35668, avg=6629.99, stdev=1918.03 00:33:12.002 clat (usec): min=409, max=396360, avg=40107.10, stdev=21845.98 00:33:12.002 lat (usec): min=418, max=396367, avg=40113.73, stdev=21846.15 00:33:12.002 clat percentiles (msec): 00:33:12.002 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:12.002 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:33:12.002 | 70.00th=[ 37], 80.00th=[ 42], 90.00th=[ 46], 95.00th=[ 59], 00:33:12.002 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 279], 99.95th=[ 326], 00:33:12.002 | 99.99th=[ 384] 00:33:12.002 write: IOPS=2671, BW=10.4MiB/s (10.9MB/s)(256MiB/24534msec); 0 zone resets 00:33:12.002 slat (usec): min=4, max=811, avg= 8.86, stdev= 5.95 00:33:12.002 clat (usec): min=433, max=116345, avg=13531.17, stdev=22055.71 00:33:12.002 lat (usec): min=446, max=116353, avg=13540.03, stdev=22056.04 00:33:12.002 clat percentiles (usec): 00:33:12.002 | 1.00th=[ 1057], 5.00th=[ 1500], 10.00th=[ 1860], 20.00th=[ 3032], 00:33:12.002 | 30.00th=[ 4555], 40.00th=[ 5800], 50.00th=[ 6783], 60.00th=[ 7963], 00:33:12.002 | 70.00th=[ 9372], 80.00th=[ 13435], 90.00th=[ 19268], 95.00th=[ 83362], 00:33:12.002 | 99.00th=[ 98042], 99.50th=[101188], 99.90th=[109577], 99.95th=[112722], 00:33:12.002 | 99.99th=[114820] 00:33:12.002 bw ( KiB/s): min= 168, max=43512, per=81.77%, avg=17475.17, stdev=10807.11, samples=30 00:33:12.002 iops : min= 42, max=10878, avg=4368.77, stdev=2701.78, samples=30 00:33:12.002 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.32% 00:33:12.002 lat (msec) : 2=5.59%, 4=7.38%, 10=23.19%, 20=9.37%, 50=46.40% 00:33:12.002 lat (msec) : 100=6.33%, 250=1.30%, 500=0.06% 00:33:12.002 cpu : usr=99.18%, sys=0.18%, ctx=42, majf=0, minf=5603 00:33:12.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:12.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.002 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:12.002 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:12.002 second_half: (groupid=0, jobs=1): err= 0: pid=75276: Wed Nov 20 07:30:33 2024 00:33:12.002 read: IOPS=2369, BW=9478KiB/s (9705kB/s)(255MiB/27510msec) 00:33:12.002 slat (nsec): min=3665, max=47759, avg=6535.84, stdev=1855.88 00:33:12.002 clat (usec): min=493, max=411138, avg=40209.70, stdev=23385.10 00:33:12.002 lat (usec): min=501, max=411146, avg=40216.24, stdev=23385.38 00:33:12.002 clat percentiles (msec): 00:33:12.002 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:33:12.002 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:33:12.002 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 57], 
00:33:12.002 | 99.00th=[ 167], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 255], 00:33:12.002 | 99.99th=[ 401] 00:33:12.002 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(256MiB/21865msec); 0 zone resets 00:33:12.002 slat (usec): min=4, max=335, avg= 8.86, stdev= 4.67 00:33:12.002 clat (usec): min=438, max=117218, avg=13713.40, stdev=22546.25 00:33:12.002 lat (usec): min=446, max=117225, avg=13722.26, stdev=22546.36 00:33:12.002 clat percentiles (usec): 00:33:12.002 | 1.00th=[ 1045], 5.00th=[ 1352], 10.00th=[ 1565], 20.00th=[ 1958], 00:33:12.002 | 30.00th=[ 2638], 40.00th=[ 4047], 50.00th=[ 5604], 60.00th=[ 7439], 00:33:12.002 | 70.00th=[ 11207], 80.00th=[ 14746], 90.00th=[ 34866], 95.00th=[ 83362], 00:33:12.002 | 99.00th=[ 96994], 99.50th=[100140], 99.90th=[109577], 99.95th=[110625], 00:33:12.002 | 99.99th=[114820] 00:33:12.002 bw ( KiB/s): min= 1232, max=47416, per=84.58%, avg=18075.52, stdev=11672.98, samples=29 00:33:12.002 iops : min= 308, max=11854, avg=4518.83, stdev=2918.26, samples=29 00:33:12.002 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.32% 00:33:12.002 lat (msec) : 2=10.14%, 4=9.63%, 10=15.01%, 20=10.12%, 50=47.42% 00:33:12.002 lat (msec) : 100=5.43%, 250=1.85%, 500=0.03% 00:33:12.002 cpu : usr=99.23%, sys=0.15%, ctx=51, majf=0, minf=5514 00:33:12.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:12.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.002 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:12.002 issued rwts: total=65185,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:12.002 00:33:12.002 Run status group 0 (all jobs): 00:33:12.002 READ: bw=18.5MiB/s (19.4MB/s), 9478KiB/s-9528KiB/s (9705kB/s-9757kB/s), io=510MiB (535MB), run=27417-27510msec 00:33:12.002 WRITE: bw=20.9MiB/s (21.9MB/s), 10.4MiB/s-11.7MiB/s (10.9MB/s-12.3MB/s), io=512MiB (537MB), run=21865-24534msec 00:33:12.002 ----------------------------------------------------- 00:33:12.002 Suppressions used: 00:33:12.002 count bytes template 00:33:12.002 2 10 /usr/src/fio/parse.c 00:33:12.002 4 384 /usr/src/fio/iolog.c 00:33:12.002 1 8 libtcmalloc_minimal.so 00:33:12.002 1 904 libcrypto.so 00:33:12.002 ----------------------------------------------------- 00:33:12.002 00:33:12.002 07:30:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:33:12.002 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.002 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:12.002 07:30:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
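The fio_bdev wrapper traced here (and before both runs above) exists because fio itself is not built with ASan while the SPDK plugin is: it inspects the plugin for a linked sanitizer runtime and preloads that runtime ahead of the plugin. A condensed sketch of the idiom, with commands and paths lifted from the trace; the job path is a placeholder:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
    if [[ -n "$asan_lib" ]]; then
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio /path/to/job.fio
    fi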
00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:12.003 07:30:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:33:12.003 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:33:12.003 fio-3.35 00:33:12.003 Starting 1 thread 00:33:30.086 00:33:30.086 test: (groupid=0, jobs=1): err= 0: pid=75624: Wed Nov 20 07:30:51 2024 00:33:30.086 read: IOPS=7241, BW=28.3MiB/s (29.7MB/s)(255MiB/9004msec) 00:33:30.086 slat (usec): min=3, max=144, avg= 5.89, stdev= 2.14 00:33:30.086 clat (usec): min=694, max=34028, avg=17665.13, stdev=1354.73 00:33:30.086 lat (usec): min=698, max=34032, avg=17671.02, stdev=1354.74 00:33:30.086 clat percentiles (usec): 00:33:30.086 | 1.00th=[16188], 5.00th=[16581], 10.00th=[16712], 20.00th=[16909], 00:33:30.086 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17171], 60.00th=[17433], 00:33:30.086 | 70.00th=[17695], 80.00th=[17957], 90.00th=[19792], 95.00th=[20317], 00:33:30.086 | 99.00th=[22152], 99.50th=[24511], 99.90th=[26346], 99.95th=[30016], 00:33:30.086 | 99.99th=[33424] 00:33:30.086 write: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(256MiB/5573msec); 0 zone resets 00:33:30.086 slat (usec): min=4, max=506, avg=10.00, stdev= 6.34 00:33:30.086 clat (usec): min=646, max=61728, avg=10826.65, stdev=13439.82 00:33:30.086 lat (usec): min=655, max=61736, avg=10836.65, stdev=13439.69 00:33:30.086 clat percentiles (usec): 00:33:30.086 | 1.00th=[ 930], 5.00th=[ 1106], 10.00th=[ 1221], 20.00th=[ 1401], 00:33:30.086 | 30.00th=[ 1598], 40.00th=[ 2089], 50.00th=[ 7308], 60.00th=[ 8455], 00:33:30.086 | 70.00th=[ 9765], 80.00th=[11469], 90.00th=[38011], 95.00th=[42730], 00:33:30.086 | 99.00th=[47449], 99.50th=[49021], 99.90th=[57410], 99.95th=[59507], 00:33:30.086 | 99.99th=[60556] 00:33:30.086 bw ( KiB/s): min= 5640, max=65704, per=92.88%, avg=43690.67, stdev=14637.64, samples=12 00:33:30.086 iops : min= 1410, max=16426, avg=10922.67, stdev=3659.41, samples=12 00:33:30.086 lat (usec) : 750=0.03%, 1000=1.03% 00:33:30.086 lat (msec) : 2=18.67%, 4=1.25%, 10=14.67%, 20=52.26%, 50=11.89% 00:33:30.086 lat (msec) : 100=0.19% 00:33:30.086 cpu : usr=98.49%, sys=0.51%, ctx=24, 
majf=0, minf=5565 00:33:30.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:30.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:30.086 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:30.086 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:30.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:30.086 00:33:30.086 Run status group 0 (all jobs): 00:33:30.086 READ: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=255MiB (267MB), run=9004-9004msec 00:33:30.086 WRITE: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=256MiB (268MB), run=5573-5573msec 00:33:30.086 ----------------------------------------------------- 00:33:30.086 Suppressions used: 00:33:30.086 count bytes template 00:33:30.086 1 5 /usr/src/fio/parse.c 00:33:30.086 2 192 /usr/src/fio/iolog.c 00:33:30.086 1 8 libtcmalloc_minimal.so 00:33:30.086 1 904 libcrypto.so 00:33:30.086 ----------------------------------------------------- 00:33:30.086 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:30.086 Remove shared memory files 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58245 /dev/shm/spdk_tgt_trace.pid73848 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:33:30.086 ************************************ 00:33:30.086 END TEST ftl_fio_basic 00:33:30.086 ************************************ 00:33:30.086 00:33:30.086 real 1m12.985s 00:33:30.086 user 2m40.456s 00:33:30.086 sys 0m4.166s 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.086 07:30:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:33:30.086 07:30:54 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:33:30.086 07:30:54 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:30.086 07:30:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.086 07:30:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:30.086 ************************************ 00:33:30.086 START TEST ftl_bdevperf 00:33:30.086 ************************************ 00:33:30.086 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:33:30.086 * Looking for test storage... 
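The START banner above is printed by the run_test wrapper (ftl.sh@74), which names the suite, times the script, and emits the START/END banners plus the real/user/sys totals seen at the close of ftl_fio_basic. Its two positional arguments become the data and cache PCI addresses that bdevperf.sh@11-12 pick up below. The call as made in this run, with $rootdir standing for the repo root that common.sh derives:

    run_test ftl_bdevperf "$rootdir/test/ftl/bdevperf.sh" 0000:00:11.0 0000:00:10.0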
00:33:30.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:30.086 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:30.086 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:33:30.086 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.346 --rc genhtml_branch_coverage=1 00:33:30.346 --rc genhtml_function_coverage=1 00:33:30.346 --rc genhtml_legend=1 00:33:30.346 --rc geninfo_all_blocks=1 00:33:30.346 --rc geninfo_unexecuted_blocks=1 00:33:30.346 00:33:30.346 ' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.346 --rc genhtml_branch_coverage=1 00:33:30.346 
--rc genhtml_function_coverage=1 00:33:30.346 --rc genhtml_legend=1 00:33:30.346 --rc geninfo_all_blocks=1 00:33:30.346 --rc geninfo_unexecuted_blocks=1 00:33:30.346 00:33:30.346 ' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.346 --rc genhtml_branch_coverage=1 00:33:30.346 --rc genhtml_function_coverage=1 00:33:30.346 --rc genhtml_legend=1 00:33:30.346 --rc geninfo_all_blocks=1 00:33:30.346 --rc geninfo_unexecuted_blocks=1 00:33:30.346 00:33:30.346 ' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:30.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.346 --rc genhtml_branch_coverage=1 00:33:30.346 --rc genhtml_function_coverage=1 00:33:30.346 --rc genhtml_legend=1 00:33:30.346 --rc geninfo_all_blocks=1 00:33:30.346 --rc geninfo_unexecuted_blocks=1 00:33:30.346 00:33:30.346 ' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75879 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75879 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75879 ']' 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.346 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.347 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.347 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.347 07:30:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:30.347 [2024-11-20 07:30:54.502404] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
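bdevperf is started here in RPC-driven mode (bdevperf.sh@17-21): -z holds it idle until it is driven over RPC rather than running a workload immediately, and -T ftl0 names the bdev it will exercise once that bdev exists. A sketch of the launch-and-wait pattern using only names that appear in the log:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" -z -T ftl0 &          # flags as used at bdevperf.sh@17
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid"     # autotest_common.sh helper; blocks until the RPC socket answers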
00:33:30.347 [2024-11-20 07:30:54.502742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75879 ] 00:33:30.606 [2024-11-20 07:30:54.676786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.606 [2024-11-20 07:30:54.797252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:33:31.544 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:33:31.803 07:30:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:32.062 { 00:33:32.062 "name": "nvme0n1", 00:33:32.062 "aliases": [ 00:33:32.062 "a0390e08-def7-4e0c-b3ad-0d027712313f" 00:33:32.062 ], 00:33:32.062 "product_name": "NVMe disk", 00:33:32.062 "block_size": 4096, 00:33:32.062 "num_blocks": 1310720, 00:33:32.062 "uuid": "a0390e08-def7-4e0c-b3ad-0d027712313f", 00:33:32.062 "numa_id": -1, 00:33:32.062 "assigned_rate_limits": { 00:33:32.062 "rw_ios_per_sec": 0, 00:33:32.062 "rw_mbytes_per_sec": 0, 00:33:32.062 "r_mbytes_per_sec": 0, 00:33:32.062 "w_mbytes_per_sec": 0 00:33:32.062 }, 00:33:32.062 "claimed": true, 00:33:32.062 "claim_type": "read_many_write_one", 00:33:32.062 "zoned": false, 00:33:32.062 "supported_io_types": { 00:33:32.062 "read": true, 00:33:32.062 "write": true, 00:33:32.062 "unmap": true, 00:33:32.062 "flush": true, 00:33:32.062 "reset": true, 00:33:32.062 "nvme_admin": true, 00:33:32.062 "nvme_io": true, 00:33:32.062 "nvme_io_md": false, 00:33:32.062 "write_zeroes": true, 00:33:32.062 "zcopy": false, 00:33:32.062 "get_zone_info": false, 00:33:32.062 "zone_management": false, 00:33:32.062 "zone_append": false, 00:33:32.062 "compare": true, 00:33:32.062 "compare_and_write": false, 00:33:32.062 "abort": true, 00:33:32.062 "seek_hole": false, 00:33:32.062 "seek_data": false, 00:33:32.062 "copy": true, 00:33:32.062 "nvme_iov_md": false 00:33:32.062 }, 00:33:32.062 "driver_specific": { 00:33:32.062 
"nvme": [ 00:33:32.062 { 00:33:32.062 "pci_address": "0000:00:11.0", 00:33:32.062 "trid": { 00:33:32.062 "trtype": "PCIe", 00:33:32.062 "traddr": "0000:00:11.0" 00:33:32.062 }, 00:33:32.062 "ctrlr_data": { 00:33:32.062 "cntlid": 0, 00:33:32.062 "vendor_id": "0x1b36", 00:33:32.062 "model_number": "QEMU NVMe Ctrl", 00:33:32.062 "serial_number": "12341", 00:33:32.062 "firmware_revision": "8.0.0", 00:33:32.062 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:32.062 "oacs": { 00:33:32.062 "security": 0, 00:33:32.062 "format": 1, 00:33:32.062 "firmware": 0, 00:33:32.062 "ns_manage": 1 00:33:32.062 }, 00:33:32.062 "multi_ctrlr": false, 00:33:32.062 "ana_reporting": false 00:33:32.062 }, 00:33:32.062 "vs": { 00:33:32.062 "nvme_version": "1.4" 00:33:32.062 }, 00:33:32.062 "ns_data": { 00:33:32.062 "id": 1, 00:33:32.062 "can_share": false 00:33:32.062 } 00:33:32.062 } 00:33:32.062 ], 00:33:32.062 "mp_policy": "active_passive" 00:33:32.062 } 00:33:32.062 } 00:33:32.062 ]' 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:32.062 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:32.321 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=42e51f2f-6c22-4b6c-b6c5-cac259e5603e 00:33:32.321 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:33:32.321 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42e51f2f-6c22-4b6c-b6c5-cac259e5603e 00:33:32.580 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:32.838 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=02edfb5b-c296-444f-a620-7a57d417e158 00:33:32.838 07:30:56 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 02edfb5b-c296-444f-a620-7a57d417e158 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.098 07:30:57 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:33:33.098 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:33.357 { 00:33:33.357 "name": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:33.357 "aliases": [ 00:33:33.357 "lvs/nvme0n1p0" 00:33:33.357 ], 00:33:33.357 "product_name": "Logical Volume", 00:33:33.357 "block_size": 4096, 00:33:33.357 "num_blocks": 26476544, 00:33:33.357 "uuid": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:33.357 "assigned_rate_limits": { 00:33:33.357 "rw_ios_per_sec": 0, 00:33:33.357 "rw_mbytes_per_sec": 0, 00:33:33.357 "r_mbytes_per_sec": 0, 00:33:33.357 "w_mbytes_per_sec": 0 00:33:33.357 }, 00:33:33.357 "claimed": false, 00:33:33.357 "zoned": false, 00:33:33.357 "supported_io_types": { 00:33:33.357 "read": true, 00:33:33.357 "write": true, 00:33:33.357 "unmap": true, 00:33:33.357 "flush": false, 00:33:33.357 "reset": true, 00:33:33.357 "nvme_admin": false, 00:33:33.357 "nvme_io": false, 00:33:33.357 "nvme_io_md": false, 00:33:33.357 "write_zeroes": true, 00:33:33.357 "zcopy": false, 00:33:33.357 "get_zone_info": false, 00:33:33.357 "zone_management": false, 00:33:33.357 "zone_append": false, 00:33:33.357 "compare": false, 00:33:33.357 "compare_and_write": false, 00:33:33.357 "abort": false, 00:33:33.357 "seek_hole": true, 00:33:33.357 "seek_data": true, 00:33:33.357 "copy": false, 00:33:33.357 "nvme_iov_md": false 00:33:33.357 }, 00:33:33.357 "driver_specific": { 00:33:33.357 "lvol": { 00:33:33.357 "lvol_store_uuid": "02edfb5b-c296-444f-a620-7a57d417e158", 00:33:33.357 "base_bdev": "nvme0n1", 00:33:33.357 "thin_provision": true, 00:33:33.357 "num_allocated_clusters": 0, 00:33:33.357 "snapshot": false, 00:33:33.357 "clone": false, 00:33:33.357 "esnap_clone": false 00:33:33.357 } 00:33:33.357 } 00:33:33.357 } 00:33:33.357 ]' 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:33:33.357 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:33:33.616 07:30:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:33.885 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:33.885 { 00:33:33.885 "name": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:33.885 "aliases": [ 00:33:33.885 "lvs/nvme0n1p0" 00:33:33.885 ], 00:33:33.885 "product_name": "Logical Volume", 00:33:33.885 "block_size": 4096, 00:33:33.885 "num_blocks": 26476544, 00:33:33.885 "uuid": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:33.885 "assigned_rate_limits": { 00:33:33.885 "rw_ios_per_sec": 0, 00:33:33.885 "rw_mbytes_per_sec": 0, 00:33:33.885 "r_mbytes_per_sec": 0, 00:33:33.885 "w_mbytes_per_sec": 0 00:33:33.885 }, 00:33:33.885 "claimed": false, 00:33:33.885 "zoned": false, 00:33:33.885 "supported_io_types": { 00:33:33.885 "read": true, 00:33:33.885 "write": true, 00:33:33.885 "unmap": true, 00:33:33.885 "flush": false, 00:33:33.885 "reset": true, 00:33:33.885 "nvme_admin": false, 00:33:33.885 "nvme_io": false, 00:33:33.885 "nvme_io_md": false, 00:33:33.885 "write_zeroes": true, 00:33:33.885 "zcopy": false, 00:33:33.885 "get_zone_info": false, 00:33:33.885 "zone_management": false, 00:33:33.885 "zone_append": false, 00:33:33.885 "compare": false, 00:33:33.885 "compare_and_write": false, 00:33:33.885 "abort": false, 00:33:33.885 "seek_hole": true, 00:33:33.885 "seek_data": true, 00:33:33.885 "copy": false, 00:33:33.885 "nvme_iov_md": false 00:33:33.885 }, 00:33:33.885 "driver_specific": { 00:33:33.885 "lvol": { 00:33:33.885 "lvol_store_uuid": "02edfb5b-c296-444f-a620-7a57d417e158", 00:33:33.885 "base_bdev": "nvme0n1", 00:33:33.885 "thin_provision": true, 00:33:33.885 "num_allocated_clusters": 0, 00:33:33.885 "snapshot": false, 00:33:33.885 "clone": false, 00:33:33.885 "esnap_clone": false 00:33:33.885 } 00:33:33.885 } 00:33:33.885 } 00:33:33.885 ]' 00:33:33.885 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:33:34.144 07:30:58 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:33:34.402 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb5e0776-9921-41d1-b0b5-91b02d62e551 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:34.661 { 00:33:34.661 "name": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:34.661 "aliases": [ 00:33:34.661 "lvs/nvme0n1p0" 00:33:34.661 ], 00:33:34.661 "product_name": "Logical Volume", 00:33:34.661 "block_size": 4096, 00:33:34.661 "num_blocks": 26476544, 00:33:34.661 "uuid": "fb5e0776-9921-41d1-b0b5-91b02d62e551", 00:33:34.661 "assigned_rate_limits": { 00:33:34.661 "rw_ios_per_sec": 0, 00:33:34.661 "rw_mbytes_per_sec": 0, 00:33:34.661 "r_mbytes_per_sec": 0, 00:33:34.661 "w_mbytes_per_sec": 0 00:33:34.661 }, 00:33:34.661 "claimed": false, 00:33:34.661 "zoned": false, 00:33:34.661 "supported_io_types": { 00:33:34.661 "read": true, 00:33:34.661 "write": true, 00:33:34.661 "unmap": true, 00:33:34.661 "flush": false, 00:33:34.661 "reset": true, 00:33:34.661 "nvme_admin": false, 00:33:34.661 "nvme_io": false, 00:33:34.661 "nvme_io_md": false, 00:33:34.661 "write_zeroes": true, 00:33:34.661 "zcopy": false, 00:33:34.661 "get_zone_info": false, 00:33:34.661 "zone_management": false, 00:33:34.661 "zone_append": false, 00:33:34.661 "compare": false, 00:33:34.661 "compare_and_write": false, 00:33:34.661 "abort": false, 00:33:34.661 "seek_hole": true, 00:33:34.661 "seek_data": true, 00:33:34.661 "copy": false, 00:33:34.661 "nvme_iov_md": false 00:33:34.661 }, 00:33:34.661 "driver_specific": { 00:33:34.661 "lvol": { 00:33:34.661 "lvol_store_uuid": "02edfb5b-c296-444f-a620-7a57d417e158", 00:33:34.661 "base_bdev": "nvme0n1", 00:33:34.661 "thin_provision": true, 00:33:34.661 "num_allocated_clusters": 0, 00:33:34.661 "snapshot": false, 00:33:34.661 "clone": false, 00:33:34.661 "esnap_clone": false 00:33:34.661 } 00:33:34.661 } 00:33:34.661 } 00:33:34.661 ]' 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:33:34.661 07:30:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb5e0776-9921-41d1-b0b5-91b02d62e551 -c nvc0n1p0 --l2p_dram_limit 20 00:33:34.928 [2024-11-20 07:30:59.075070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.075125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:34.928 [2024-11-20 07:30:59.075141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:34.928 [2024-11-20 07:30:59.075157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.075218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.075236] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:34.928 [2024-11-20 07:30:59.075248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:34.928 [2024-11-20 07:30:59.075260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.075280] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:34.928 [2024-11-20 07:30:59.076410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:34.928 [2024-11-20 07:30:59.076434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.076449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:34.928 [2024-11-20 07:30:59.076461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:33:34.928 [2024-11-20 07:30:59.076476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.076558] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8dcd8e80-fb40-424b-9cfc-394d2e5936cf 00:33:34.928 [2024-11-20 07:30:59.078072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.078108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:34.928 [2024-11-20 07:30:59.078124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:34.928 [2024-11-20 07:30:59.078140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.085869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.085901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:34.928 [2024-11-20 07:30:59.085917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.681 ms 00:33:34.928 [2024-11-20 07:30:59.085927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.086037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.086052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:34.928 [2024-11-20 07:30:59.086070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:33:34.928 [2024-11-20 07:30:59.086080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.086147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.086159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:34.928 [2024-11-20 07:30:59.086173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:34.928 [2024-11-20 07:30:59.086183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.086210] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:34.928 [2024-11-20 07:30:59.091488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.091521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:34.928 [2024-11-20 07:30:59.091534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.287 ms 00:33:34.928 [2024-11-20 07:30:59.091548] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.091584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.091598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:34.928 [2024-11-20 07:30:59.091609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:34.928 [2024-11-20 07:30:59.091621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.091665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:34.928 [2024-11-20 07:30:59.091823] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:34.928 [2024-11-20 07:30:59.091855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:34.928 [2024-11-20 07:30:59.091873] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:34.928 [2024-11-20 07:30:59.091888] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:34.928 [2024-11-20 07:30:59.091903] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:34.928 [2024-11-20 07:30:59.091916] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:34.928 [2024-11-20 07:30:59.091930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:34.928 [2024-11-20 07:30:59.091940] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:34.928 [2024-11-20 07:30:59.091954] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:34.928 [2024-11-20 07:30:59.091965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.091983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:34.928 [2024-11-20 07:30:59.091994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:33:34.928 [2024-11-20 07:30:59.092008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.928 [2024-11-20 07:30:59.092087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.928 [2024-11-20 07:30:59.092104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:34.928 [2024-11-20 07:30:59.092115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:34.929 [2024-11-20 07:30:59.092131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.929 [2024-11-20 07:30:59.092219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:34.929 [2024-11-20 07:30:59.092238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:34.929 [2024-11-20 07:30:59.092253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:34.929 [2024-11-20 07:30:59.092381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:34.929 
[2024-11-20 07:30:59.092405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:34.929 [2024-11-20 07:30:59.092415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.929 [2024-11-20 07:30:59.092441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:34.929 [2024-11-20 07:30:59.092454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:34.929 [2024-11-20 07:30:59.092464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.929 [2024-11-20 07:30:59.092489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:34.929 [2024-11-20 07:30:59.092500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:34.929 [2024-11-20 07:30:59.092517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:34.929 [2024-11-20 07:30:59.092540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:34.929 [2024-11-20 07:30:59.092576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:34.929 [2024-11-20 07:30:59.092612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:34.929 [2024-11-20 07:30:59.092645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:34.929 [2024-11-20 07:30:59.092680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:34.929 [2024-11-20 07:30:59.092716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.929 [2024-11-20 07:30:59.092739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:34.929 [2024-11-20 07:30:59.092752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:34.929 [2024-11-20 07:30:59.092762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.929 [2024-11-20 07:30:59.092775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:34.929 [2024-11-20 07:30:59.092786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:33:34.929 [2024-11-20 07:30:59.092799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:34.929 [2024-11-20 07:30:59.092835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:34.929 [2024-11-20 07:30:59.092846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:34.929 [2024-11-20 07:30:59.092870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:34.929 [2024-11-20 07:30:59.092884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.929 [2024-11-20 07:30:59.092914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:34.929 [2024-11-20 07:30:59.092925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:34.929 [2024-11-20 07:30:59.092938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:34.929 [2024-11-20 07:30:59.092949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:34.929 [2024-11-20 07:30:59.092962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:34.929 [2024-11-20 07:30:59.092972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:34.929 [2024-11-20 07:30:59.092990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:34.929 [2024-11-20 07:30:59.093003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:34.929 [2024-11-20 07:30:59.093031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:34.929 [2024-11-20 07:30:59.093045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:34.929 [2024-11-20 07:30:59.093066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:34.929 [2024-11-20 07:30:59.093080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:34.929 [2024-11-20 07:30:59.093090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:34.929 [2024-11-20 07:30:59.093103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:34.929 [2024-11-20 07:30:59.093114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:34.929 [2024-11-20 07:30:59.093129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:34.929 [2024-11-20 07:30:59.093139] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:34.929 [2024-11-20 07:30:59.093199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:34.929 [2024-11-20 07:30:59.093210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:34.929 [2024-11-20 07:30:59.093236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:34.929 [2024-11-20 07:30:59.093251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:34.929 [2024-11-20 07:30:59.093262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:34.929 [2024-11-20 07:30:59.093275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.929 [2024-11-20 07:30:59.093289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:34.930 [2024-11-20 07:30:59.093302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:33:34.930 [2024-11-20 07:30:59.093312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.930 [2024-11-20 07:30:59.093354] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
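Every number in the layout dump above follows from the stack assembled earlier in this log and from the arguments of the create call. For reference, the RPC sequence (commands and UUIDs copied from this run), plus a quick check of the L2P sizing that --l2p_dram_limit is working against:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 02edfb5b-c296-444f-a620-7a57d417e158   # thin 103424 MiB lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device
  $rpc bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB NV cache slice
  $rpc -t 240 bdev_ftl_create -b ftl0 -d fb5e0776-9921-41d1-b0b5-91b02d62e551 -c nvc0n1p0 --l2p_dram_limit 20
  # 20971520 L2P entries x 4 bytes each = 80 MiB of mapping table,
  # matching the 80.00 MiB l2p region above but well over the 20 MiB DRAM limit
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80

Because the full table does not fit in the allowed DRAM, FTL pages it through the cache device; the "l2p maximum resident size is: 19 (of 20) MiB" notice a few seconds later shows the slice that actually stays resident.
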
00:33:34.930 [2024-11-20 07:30:59.093367] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:39.156 [2024-11-20 07:31:03.192873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.192937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:39.156 [2024-11-20 07:31:03.192962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4099.478 ms 00:33:39.156 [2024-11-20 07:31:03.192974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.234079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.234133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:39.156 [2024-11-20 07:31:03.234152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.742 ms 00:33:39.156 [2024-11-20 07:31:03.234164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.234326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.234339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:39.156 [2024-11-20 07:31:03.234356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:39.156 [2024-11-20 07:31:03.234367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.297974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.298033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:39.156 [2024-11-20 07:31:03.298054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.560 ms 00:33:39.156 [2024-11-20 07:31:03.298067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.298123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.298139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:39.156 [2024-11-20 07:31:03.298155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:39.156 [2024-11-20 07:31:03.298166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.298699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.298716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:39.156 [2024-11-20 07:31:03.298731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:33:39.156 [2024-11-20 07:31:03.298743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.298882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.298907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:39.156 [2024-11-20 07:31:03.298923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:33:39.156 [2024-11-20 07:31:03.298934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.319525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.319572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:39.156 [2024-11-20 
07:31:03.319606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.568 ms 00:33:39.156 [2024-11-20 07:31:03.319617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.156 [2024-11-20 07:31:03.332190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:33:39.156 [2024-11-20 07:31:03.338145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.156 [2024-11-20 07:31:03.338185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:39.156 [2024-11-20 07:31:03.338200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.411 ms 00:33:39.156 [2024-11-20 07:31:03.338214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.415 [2024-11-20 07:31:03.419321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.415 [2024-11-20 07:31:03.419383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:39.415 [2024-11-20 07:31:03.419401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.060 ms 00:33:39.415 [2024-11-20 07:31:03.419417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.415 [2024-11-20 07:31:03.419625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.415 [2024-11-20 07:31:03.419647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:39.415 [2024-11-20 07:31:03.419659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:33:39.415 [2024-11-20 07:31:03.419674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.415 [2024-11-20 07:31:03.459406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.415 [2024-11-20 07:31:03.459464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:39.415 [2024-11-20 07:31:03.459481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.665 ms 00:33:39.415 [2024-11-20 07:31:03.459494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.415 [2024-11-20 07:31:03.496915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.415 [2024-11-20 07:31:03.496965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:39.415 [2024-11-20 07:31:03.496981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.371 ms 00:33:39.415 [2024-11-20 07:31:03.496994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.415 [2024-11-20 07:31:03.497803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.415 [2024-11-20 07:31:03.497843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:39.415 [2024-11-20 07:31:03.497857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:33:39.415 [2024-11-20 07:31:03.497871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.674 [2024-11-20 07:31:03.615888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.674 [2024-11-20 07:31:03.615962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:39.674 [2024-11-20 07:31:03.615979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.955 ms 00:33:39.674 [2024-11-20 07:31:03.615993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.674 [2024-11-20 
07:31:03.655989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.674 [2024-11-20 07:31:03.656069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:39.674 [2024-11-20 07:31:03.656086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.865 ms 00:33:39.674 [2024-11-20 07:31:03.656103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.675 [2024-11-20 07:31:03.695106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.675 [2024-11-20 07:31:03.695181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:39.675 [2024-11-20 07:31:03.695196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.951 ms 00:33:39.675 [2024-11-20 07:31:03.695209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.675 [2024-11-20 07:31:03.732562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.675 [2024-11-20 07:31:03.732624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:39.675 [2024-11-20 07:31:03.732638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.308 ms 00:33:39.675 [2024-11-20 07:31:03.732652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.675 [2024-11-20 07:31:03.732696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.675 [2024-11-20 07:31:03.732714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:39.675 [2024-11-20 07:31:03.732726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:39.675 [2024-11-20 07:31:03.732739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.675 [2024-11-20 07:31:03.732852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.675 [2024-11-20 07:31:03.732869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:39.675 [2024-11-20 07:31:03.732880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:39.675 [2024-11-20 07:31:03.732893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.675 [2024-11-20 07:31:03.734140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4658.489 ms, result 0 00:33:39.675 { 00:33:39.675 "name": "ftl0", 00:33:39.675 "uuid": "8dcd8e80-fb40-424b-9cfc-394d2e5936cf" 00:33:39.675 } 00:33:39.675 07:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:33:39.675 07:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:33:39.675 07:31:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:33:39.933 07:31:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:33:40.192 [2024-11-20 07:31:04.138580] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:33:40.192 I/O size of 69632 is greater than zero copy threshold (65536). 00:33:40.192 Zero copy mechanism will not be used. 00:33:40.192 Running I/O for 4 seconds... 
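The zero copy notice above is simple arithmetic: the requested 69632-byte I/O size is 65536 + 4096, one 4 KiB block past bdevperf's 65536-byte zero copy threshold, so buffers for this run are copied rather than mapped. The workload itself is pushed into the already-running process by the companion script, exactly as logged:

  # Ask the idle bdevperf instance to run one timed workload over RPC
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
  echo $(( 65536 + 4096 ))   # 69632, just over the zero copy threshold
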
00:33:42.061 1887.00 IOPS, 125.31 MiB/s
[2024-11-20T07:31:07.199Z] 1869.50 IOPS, 124.15 MiB/s
[2024-11-20T07:31:08.574Z] 1936.00 IOPS, 128.56 MiB/s
[2024-11-20T07:31:08.574Z] 1951.25 IOPS, 129.58 MiB/s
00:33:44.371 Latency(us)
00:33:44.371 [2024-11-20T07:31:08.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.371 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:33:44.371 ftl0 : 4.00 1950.52 129.53 0.00 0.00 536.92 187.25 2184.53
00:33:44.371 [2024-11-20T07:31:08.574Z] ===================================================================================================================
00:33:44.371 [2024-11-20T07:31:08.574Z] Total : 1950.52 129.53 0.00 0.00 536.92 187.25 2184.53
00:33:44.371 [2024-11-20 07:31:08.150560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:33:44.371 "results": [
00:33:44.371 {
00:33:44.371 "job": "ftl0",
00:33:44.371 "core_mask": "0x1",
00:33:44.371 "workload": "randwrite",
00:33:44.371 "status": "finished",
00:33:44.371 "queue_depth": 1,
00:33:44.371 "io_size": 69632,
00:33:44.371 "runtime": 4.002019,
00:33:44.371 "iops": 1950.5154773128263,
00:33:44.371 "mibps": 129.52641841530487,
00:33:44.371 "io_failed": 0,
00:33:44.371 "io_timeout": 0,
00:33:44.371 "avg_latency_us": 536.9211688200774,
00:33:44.371 "min_latency_us": 187.24571428571429,
00:33:44.371 "max_latency_us": 2184.5333333333333
00:33:44.372 }
00:33:44.372 ],
00:33:44.372 "core_count": 1
00:33:44.372 }
00:33:44.372 07:31:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-20 07:31:08.311691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
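At queue depth 1 each I/O must complete before the next is submitted, so average latency and the reciprocal of IOPS track each other closely; the depth-1 run's 1950.52 IOPS corresponds to about 513 us per I/O, in the same range as the 536.92 us average the table reports once submission overhead is counted. A quick check:

  awk 'BEGIN { printf "%.1f us\n", 1e6 / 1950.52 }'   # ~512.7 us vs 536.92 us reported
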
00:33:46.261 9272.00 IOPS, 36.22 MiB/s
[2024-11-20T07:31:11.402Z] 9018.00 IOPS, 35.23 MiB/s
[2024-11-20T07:31:12.339Z] 8967.33 IOPS, 35.03 MiB/s
[2024-11-20T07:31:12.339Z] 9028.50 IOPS, 35.27 MiB/s
00:33:48.136 Latency(us)
00:33:48.136 [2024-11-20T07:31:12.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:48.137 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:33:48.137 ftl0 : 4.02 9022.68 35.24 0.00 0.00 14156.59 286.72 29584.82
00:33:48.137 [2024-11-20T07:31:12.340Z] ===================================================================================================================
00:33:48.137 [2024-11-20T07:31:12.340Z] Total : 9022.68 35.24 0.00 0.00 14156.59 0.00 29584.82
00:33:48.397 [2024-11-20 07:31:12.338296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:33:48.397 "results": [
00:33:48.397 {
00:33:48.397 "job": "ftl0",
00:33:48.397 "core_mask": "0x1",
00:33:48.397 "workload": "randwrite",
00:33:48.397 "status": "finished",
00:33:48.397 "queue_depth": 128,
00:33:48.397 "io_size": 4096,
00:33:48.397 "runtime": 4.016324,
00:33:48.397 "iops": 9022.678449248617,
00:33:48.397 "mibps": 35.24483769237741,
00:33:48.397 "io_failed": 0,
00:33:48.397 "io_timeout": 0,
00:33:48.397 "avg_latency_us": 14156.594001876483,
00:33:48.397 "min_latency_us": 286.72,
00:33:48.397 "max_latency_us": 29584.822857142855
00:33:48.397 }
00:33:48.397 ],
00:33:48.397 "core_count": 1
00:33:48.397 }
00:33:48.397 07:31:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-20 07:31:12.496105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
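The MiB/s column is just IOPS times the I/O size: for the 128-deep randwrite pass above, 9022.68 IOPS at 4096 bytes reproduces the reported 35.24 MiB/s. Note how the deeper queue trades latency for throughput on the same device: average latency grows from ~537 us at depth 1 to ~14.2 ms at depth 128.

  awk 'BEGIN { printf "%.2f MiB/s\n", 9022.68 * 4096 / 1048576 }'   # 35.24
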
00:33:50.719 7150.00 IOPS, 27.93 MiB/s
[2024-11-20T07:31:15.857Z] 7471.50 IOPS, 29.19 MiB/s
[2024-11-20T07:31:16.791Z] 7609.67 IOPS, 29.73 MiB/s
[2024-11-20T07:31:16.791Z] 7580.75 IOPS, 29.61 MiB/s
00:33:52.588 Latency(us)
00:33:52.588 [2024-11-20T07:31:16.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:52.588 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:52.588 Verification LBA range: start 0x0 length 0x1400000
00:33:52.588 ftl0 : 4.01 7591.80 29.66 0.00 0.00 16808.02 253.56 24716.43
00:33:52.588 [2024-11-20T07:31:16.791Z] ===================================================================================================================
00:33:52.588 [2024-11-20T07:31:16.791Z] Total : 7591.80 29.66 0.00 0.00 16808.02 0.00 24716.43
00:33:52.588 {
00:33:52.588 "results": [
00:33:52.588 {
00:33:52.588 "job": "ftl0",
00:33:52.588 "core_mask": "0x1",
00:33:52.588 "workload": "verify",
00:33:52.588 "status": "finished",
00:33:52.588 "verify_range": {
00:33:52.588 "start": 0,
00:33:52.588 "length": 20971520
00:33:52.588 },
00:33:52.588 "queue_depth": 128,
00:33:52.588 "io_size": 4096,
00:33:52.588 "runtime": 4.010774,
00:33:52.588 "iops": 7591.801482706331,
00:33:52.588 "mibps": 29.655474541821604,
00:33:52.588 "io_failed": 0,
00:33:52.588 "io_timeout": 0,
00:33:52.588 "avg_latency_us": 16808.019104544837,
00:33:52.588 "min_latency_us": 253.56190476190477,
00:33:52.588 "max_latency_us": 24716.434285714287
00:33:52.588 }
00:33:52.588 ],
00:33:52.588 "core_count": 1
00:33:52.588 }
[2024-11-20 07:31:16.528389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:33:52.588 07:31:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:33:52.847 [2024-11-20 07:31:16.850156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:52.847 [2024-11-20 07:31:16.850225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:52.847 [2024-11-20 07:31:16.850246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:52.847 [2024-11-20 07:31:16.850259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:52.847 [2024-11-20 07:31:16.850284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:52.847 [2024-11-20 07:31:16.854603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:52.847 [2024-11-20 07:31:16.854639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:52.847 [2024-11-20 07:31:16.854655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms
00:33:52.847 [2024-11-20 07:31:16.854667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:52.847 [2024-11-20 07:31:16.856434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:52.847 [2024-11-20 07:31:16.856472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:52.847 [2024-11-20 07:31:16.856492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms
00:33:52.847 [2024-11-20 07:31:16.856503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:52.847 [2024-11-20 07:31:17.026251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:52.847 [2024-11-20 07:31:17.026332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:33:52.847 [2024-11-20 07:31:17.026360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 169.702 ms
00:33:52.847 [2024-11-20 07:31:17.026373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:52.847 [2024-11-20 07:31:17.031833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:52.847 [2024-11-20 07:31:17.031872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:33:52.847 [2024-11-20 07:31:17.031889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.401 ms
00:33:52.847 [2024-11-20 07:31:17.031901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.070937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.071003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:33:53.107 [2024-11-20 07:31:17.071022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.928 ms
00:33:53.107 [2024-11-20 07:31:17.071033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.093562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.093611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:33:53.107 [2024-11-20 07:31:17.093634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.476 ms
00:33:53.107 [2024-11-20 07:31:17.093645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.093794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.093808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:33:53.107 [2024-11-20 07:31:17.093843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms
00:33:53.107 [2024-11-20 07:31:17.093854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.131366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.131411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:33:53.107 [2024-11-20 07:31:17.131429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.489 ms
00:33:53.107 [2024-11-20 07:31:17.131439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.169253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.169299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:33:53.107 [2024-11-20 07:31:17.169316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.765 ms
00:33:53.107 [2024-11-20 07:31:17.169327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.205779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.205835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:33:53.107 [2024-11-20 07:31:17.205853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.405 ms
00:33:53.107 [2024-11-20 07:31:17.205864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.107 [2024-11-20 07:31:17.243336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:53.107 [2024-11-20 07:31:17.243381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:33:53.108 [2024-11-20 07:31:17.243418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.363 ms
00:33:53.108 [2024-11-20 07:31:17.243428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:53.108 [2024-11-20 07:31:17.243472] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:53.108 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 96: 0 / 261120 wr_cnt: 0 state: free
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:53.109 [2024-11-20 07:31:17.244718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:53.109 [2024-11-20 07:31:17.244729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:53.109 [2024-11-20 07:31:17.244742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:53.109 [2024-11-20 07:31:17.244761] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:53.109 [2024-11-20 07:31:17.244773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dcd8e80-fb40-424b-9cfc-394d2e5936cf 00:33:53.109 [2024-11-20 07:31:17.244784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:53.109 [2024-11-20 07:31:17.244797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:53.109 [2024-11-20 07:31:17.244810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:53.109 [2024-11-20 07:31:17.244839] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:53.109 [2024-11-20 07:31:17.244849] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:53.109 [2024-11-20 07:31:17.244862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:53.109 [2024-11-20 07:31:17.244872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:53.109 [2024-11-20 07:31:17.244885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:53.109 [2024-11-20 07:31:17.244895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:53.109 [2024-11-20 07:31:17.244907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.109 [2024-11-20 07:31:17.244917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:53.109 [2024-11-20 07:31:17.244931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:33:53.109 [2024-11-20 07:31:17.244941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.109 [2024-11-20 07:31:17.265364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.109 [2024-11-20 07:31:17.265405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:53.109 [2024-11-20 07:31:17.265422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.346 ms 00:33:53.109 [2024-11-20 07:31:17.265432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.109 [2024-11-20 07:31:17.266020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:53.109 [2024-11-20 07:31:17.266039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:53.109 [2024-11-20 07:31:17.266053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:33:53.109 [2024-11-20 07:31:17.266064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.368 [2024-11-20 07:31:17.323615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.368 [2024-11-20 07:31:17.323679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:53.368 [2024-11-20 07:31:17.323701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.368 [2024-11-20 07:31:17.323712] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:53.368 [2024-11-20 07:31:17.323788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.368 [2024-11-20 07:31:17.323799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:53.368 [2024-11-20 07:31:17.323831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.368 [2024-11-20 07:31:17.323843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.368 [2024-11-20 07:31:17.323972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.323991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:53.369 [2024-11-20 07:31:17.324005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.324016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.324039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.324052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:53.369 [2024-11-20 07:31:17.324065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.324076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.454511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.454584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:53.369 [2024-11-20 07:31:17.454605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.454615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:53.369 [2024-11-20 07:31:17.563228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:53.369 [2024-11-20 07:31:17.563397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:53.369 [2024-11-20 07:31:17.563495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:53.369 [2024-11-20 07:31:17.563671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:33:53.369 [2024-11-20 07:31:17.563681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:53.369 [2024-11-20 07:31:17.563746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:53.369 [2024-11-20 07:31:17.563847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.563909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:53.369 [2024-11-20 07:31:17.563930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:53.369 [2024-11-20 07:31:17.563944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:53.369 [2024-11-20 07:31:17.563954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:53.369 [2024-11-20 07:31:17.564083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 713.878 ms, result 0 00:33:53.628 true 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75879 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75879 ']' 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75879 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.628 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75879
00:33:53.628 killing process with pid 75879
Received shutdown signal, test time was about 4.000000 seconds
00:33:53.628 
00:33:53.628 Latency(us)
[2024-11-20T07:31:17.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:53.628 [2024-11-20T07:31:17.831Z] ===================================================================================================================
00:33:53.628 [2024-11-20T07:31:17.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75879' 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75879 07:31:17 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75879 00:33:55.005 07:31:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:55.005 07:31:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:33:55.005 Remove shared memory files 07:31:19 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:55.005 07:31:19
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:33:55.006 00:33:55.006 real 0m24.933s 00:33:55.006 user 0m28.401s 00:33:55.006 sys 0m1.303s 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:55.006 07:31:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:55.006 ************************************ 00:33:55.006 END TEST ftl_bdevperf 00:33:55.006 ************************************ 00:33:55.006 07:31:19 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:33:55.006 07:31:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:55.006 07:31:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:55.006 07:31:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:55.006 ************************************ 00:33:55.006 START TEST ftl_trim 00:33:55.006 ************************************ 00:33:55.006 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:33:55.265 * Looking for test storage... 00:33:55.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.265 07:31:19 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:55.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.265 --rc genhtml_branch_coverage=1 00:33:55.265 --rc genhtml_function_coverage=1 00:33:55.265 --rc genhtml_legend=1 00:33:55.265 --rc geninfo_all_blocks=1 00:33:55.265 --rc geninfo_unexecuted_blocks=1 00:33:55.265 00:33:55.265 ' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:55.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.265 --rc genhtml_branch_coverage=1 00:33:55.265 --rc genhtml_function_coverage=1 00:33:55.265 --rc genhtml_legend=1 00:33:55.265 --rc geninfo_all_blocks=1 00:33:55.265 --rc geninfo_unexecuted_blocks=1 00:33:55.265 00:33:55.265 ' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:55.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.265 --rc genhtml_branch_coverage=1 00:33:55.265 --rc genhtml_function_coverage=1 00:33:55.265 --rc genhtml_legend=1 00:33:55.265 --rc geninfo_all_blocks=1 00:33:55.265 --rc geninfo_unexecuted_blocks=1 00:33:55.265 00:33:55.265 ' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:55.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.265 --rc genhtml_branch_coverage=1 00:33:55.265 --rc genhtml_function_coverage=1 00:33:55.265 --rc genhtml_legend=1 00:33:55.265 --rc geninfo_all_blocks=1 00:33:55.265 --rc geninfo_unexecuted_blocks=1 00:33:55.265 00:33:55.265 ' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
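
The lt/cmp_versions xtrace above is scripts/common.sh checking whether the installed lcov (1.15, the last field of 'lcov --version') predates major version 2, so the older --rc lcov_branch_coverage / --rc lcov_function_coverage option spellings get exported in LCOV_OPTS and LCOV below. The comparison reads both version strings into arrays split on '.', '-' and ':' and compares them numerically field by field, first difference wins. A minimal standalone sketch of the same idea; the helper name version_lt and the final echo are illustrative, not part of the test scripts:

    version_lt() {
        # Split both version strings on the separators cmp_versions uses
        # (".", "-", ":") and compare numerically, field by field.
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field wins
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field loses
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: use legacy --rc lcov_* option names"

Padding missing fields with 0 makes "1.15" vs "2" behave like 1.15.0 vs 2.0.0, which is the same effect as the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop bound visible in the trace.
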
00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:55.265 07:31:19 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76247 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76247 00:33:55.265 07:31:19 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76247 ']' 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.265 07:31:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:33:55.524 [2024-11-20 07:31:19.466151] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:33:55.524 [2024-11-20 07:31:19.466487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76247 ] 00:33:55.524 [2024-11-20 07:31:19.638725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:55.790 [2024-11-20 07:31:19.766742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.790 [2024-11-20 07:31:19.766784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.790 [2024-11-20 07:31:19.766794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:33:56.742 07:31:20 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:33:56.742 07:31:20 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:57.311 { 00:33:57.311 "name": "nvme0n1", 00:33:57.311 "aliases": [ 
00:33:57.311 "c5a01f46-7abf-490f-ae21-f41da3e03ef9" 00:33:57.311 ], 00:33:57.311 "product_name": "NVMe disk", 00:33:57.311 "block_size": 4096, 00:33:57.311 "num_blocks": 1310720, 00:33:57.311 "uuid": "c5a01f46-7abf-490f-ae21-f41da3e03ef9", 00:33:57.311 "numa_id": -1, 00:33:57.311 "assigned_rate_limits": { 00:33:57.311 "rw_ios_per_sec": 0, 00:33:57.311 "rw_mbytes_per_sec": 0, 00:33:57.311 "r_mbytes_per_sec": 0, 00:33:57.311 "w_mbytes_per_sec": 0 00:33:57.311 }, 00:33:57.311 "claimed": true, 00:33:57.311 "claim_type": "read_many_write_one", 00:33:57.311 "zoned": false, 00:33:57.311 "supported_io_types": { 00:33:57.311 "read": true, 00:33:57.311 "write": true, 00:33:57.311 "unmap": true, 00:33:57.311 "flush": true, 00:33:57.311 "reset": true, 00:33:57.311 "nvme_admin": true, 00:33:57.311 "nvme_io": true, 00:33:57.311 "nvme_io_md": false, 00:33:57.311 "write_zeroes": true, 00:33:57.311 "zcopy": false, 00:33:57.311 "get_zone_info": false, 00:33:57.311 "zone_management": false, 00:33:57.311 "zone_append": false, 00:33:57.311 "compare": true, 00:33:57.311 "compare_and_write": false, 00:33:57.311 "abort": true, 00:33:57.311 "seek_hole": false, 00:33:57.311 "seek_data": false, 00:33:57.311 "copy": true, 00:33:57.311 "nvme_iov_md": false 00:33:57.311 }, 00:33:57.311 "driver_specific": { 00:33:57.311 "nvme": [ 00:33:57.311 { 00:33:57.311 "pci_address": "0000:00:11.0", 00:33:57.311 "trid": { 00:33:57.311 "trtype": "PCIe", 00:33:57.311 "traddr": "0000:00:11.0" 00:33:57.311 }, 00:33:57.311 "ctrlr_data": { 00:33:57.311 "cntlid": 0, 00:33:57.311 "vendor_id": "0x1b36", 00:33:57.311 "model_number": "QEMU NVMe Ctrl", 00:33:57.311 "serial_number": "12341", 00:33:57.311 "firmware_revision": "8.0.0", 00:33:57.311 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:57.311 "oacs": { 00:33:57.311 "security": 0, 00:33:57.311 "format": 1, 00:33:57.311 "firmware": 0, 00:33:57.311 "ns_manage": 1 00:33:57.311 }, 00:33:57.311 "multi_ctrlr": false, 00:33:57.311 "ana_reporting": false 00:33:57.311 }, 00:33:57.311 "vs": { 00:33:57.311 "nvme_version": "1.4" 00:33:57.311 }, 00:33:57.311 "ns_data": { 00:33:57.311 "id": 1, 00:33:57.311 "can_share": false 00:33:57.311 } 00:33:57.311 } 00:33:57.311 ], 00:33:57.311 "mp_policy": "active_passive" 00:33:57.311 } 00:33:57.311 } 00:33:57.311 ]' 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:57.311 07:31:21 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:33:57.311 07:31:21 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:33:57.311 07:31:21 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:57.311 07:31:21 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:33:57.311 07:31:21 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:57.311 07:31:21 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:57.571 07:31:21 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=02edfb5b-c296-444f-a620-7a57d417e158 00:33:57.571 07:31:21 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:33:57.571 07:31:21 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 02edfb5b-c296-444f-a620-7a57d417e158 00:33:57.830 07:31:21 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:57.830 07:31:22 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a745766a-db3e-44eb-ae08-14f72e5ecd14 00:33:57.830 07:31:22 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a745766a-db3e-44eb-ae08-14f72e5ecd14 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:33:58.089 07:31:22 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.089 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.089 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:58.089 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:33:58.089 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:33:58.089 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.348 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:58.348 { 00:33:58.348 "name": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:58.348 "aliases": [ 00:33:58.348 "lvs/nvme0n1p0" 00:33:58.348 ], 00:33:58.348 "product_name": "Logical Volume", 00:33:58.348 "block_size": 4096, 00:33:58.348 "num_blocks": 26476544, 00:33:58.348 "uuid": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:58.348 "assigned_rate_limits": { 00:33:58.348 "rw_ios_per_sec": 0, 00:33:58.348 "rw_mbytes_per_sec": 0, 00:33:58.348 "r_mbytes_per_sec": 0, 00:33:58.348 "w_mbytes_per_sec": 0 00:33:58.348 }, 00:33:58.348 "claimed": false, 00:33:58.348 "zoned": false, 00:33:58.348 "supported_io_types": { 00:33:58.348 "read": true, 00:33:58.348 "write": true, 00:33:58.348 "unmap": true, 00:33:58.348 "flush": false, 00:33:58.348 "reset": true, 00:33:58.348 "nvme_admin": false, 00:33:58.348 "nvme_io": false, 00:33:58.348 "nvme_io_md": false, 00:33:58.348 "write_zeroes": true, 00:33:58.348 "zcopy": false, 00:33:58.348 "get_zone_info": false, 00:33:58.348 "zone_management": false, 00:33:58.348 "zone_append": false, 00:33:58.348 "compare": false, 00:33:58.348 "compare_and_write": false, 00:33:58.348 "abort": false, 00:33:58.348 "seek_hole": true, 00:33:58.348 "seek_data": true, 00:33:58.348 "copy": false, 00:33:58.348 "nvme_iov_md": false 00:33:58.348 }, 00:33:58.348 "driver_specific": { 00:33:58.348 "lvol": { 00:33:58.348 "lvol_store_uuid": "a745766a-db3e-44eb-ae08-14f72e5ecd14", 00:33:58.348 "base_bdev": "nvme0n1", 00:33:58.348 "thin_provision": true, 00:33:58.348 "num_allocated_clusters": 0, 00:33:58.348 "snapshot": false, 00:33:58.349 "clone": false, 00:33:58.349 "esnap_clone": false 00:33:58.349 } 00:33:58.349 } 00:33:58.349 } 00:33:58.349 ]' 00:33:58.349 07:31:22 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:58.349 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:33:58.349 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:58.349 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:58.349 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:58.349 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:33:58.349 07:31:22 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:33:58.349 07:31:22 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:33:58.349 07:31:22 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:58.608 07:31:22 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:58.608 07:31:22 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:58.608 07:31:22 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.608 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.608 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:58.608 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:33:58.608 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:33:58.608 07:31:22 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:58.867 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:58.867 { 00:33:58.867 "name": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:58.867 "aliases": [ 00:33:58.867 "lvs/nvme0n1p0" 00:33:58.867 ], 00:33:58.867 "product_name": "Logical Volume", 00:33:58.867 "block_size": 4096, 00:33:58.867 "num_blocks": 26476544, 00:33:58.867 "uuid": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:58.867 "assigned_rate_limits": { 00:33:58.867 "rw_ios_per_sec": 0, 00:33:58.867 "rw_mbytes_per_sec": 0, 00:33:58.867 "r_mbytes_per_sec": 0, 00:33:58.867 "w_mbytes_per_sec": 0 00:33:58.867 }, 00:33:58.867 "claimed": false, 00:33:58.867 "zoned": false, 00:33:58.867 "supported_io_types": { 00:33:58.867 "read": true, 00:33:58.867 "write": true, 00:33:58.867 "unmap": true, 00:33:58.867 "flush": false, 00:33:58.867 "reset": true, 00:33:58.867 "nvme_admin": false, 00:33:58.867 "nvme_io": false, 00:33:58.867 "nvme_io_md": false, 00:33:58.867 "write_zeroes": true, 00:33:58.867 "zcopy": false, 00:33:58.867 "get_zone_info": false, 00:33:58.867 "zone_management": false, 00:33:58.867 "zone_append": false, 00:33:58.867 "compare": false, 00:33:58.867 "compare_and_write": false, 00:33:58.867 "abort": false, 00:33:58.867 "seek_hole": true, 00:33:58.867 "seek_data": true, 00:33:58.867 "copy": false, 00:33:58.867 "nvme_iov_md": false 00:33:58.867 }, 00:33:58.867 "driver_specific": { 00:33:58.867 "lvol": { 00:33:58.867 "lvol_store_uuid": "a745766a-db3e-44eb-ae08-14f72e5ecd14", 00:33:58.867 "base_bdev": "nvme0n1", 00:33:58.867 "thin_provision": true, 00:33:58.867 "num_allocated_clusters": 0, 00:33:58.867 "snapshot": false, 00:33:58.867 "clone": false, 00:33:58.867 "esnap_clone": false 00:33:58.867 } 00:33:58.867 } 00:33:58.867 } 00:33:58.867 ]' 00:33:59.126 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:59.126 07:31:23 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:33:59.126 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:59.126 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:59.126 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:59.126 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:33:59.126 07:31:23 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:33:59.126 07:31:23 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:59.385 07:31:23 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:33:59.385 07:31:23 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:33:59.385 07:31:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:59.385 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=61ab7569-a49e-405c-bc33-e86fa912954b 00:33:59.385 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:59.385 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:33:59.385 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:33:59.385 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61ab7569-a49e-405c-bc33-e86fa912954b 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:59.644 { 00:33:59.644 "name": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:59.644 "aliases": [ 00:33:59.644 "lvs/nvme0n1p0" 00:33:59.644 ], 00:33:59.644 "product_name": "Logical Volume", 00:33:59.644 "block_size": 4096, 00:33:59.644 "num_blocks": 26476544, 00:33:59.644 "uuid": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:33:59.644 "assigned_rate_limits": { 00:33:59.644 "rw_ios_per_sec": 0, 00:33:59.644 "rw_mbytes_per_sec": 0, 00:33:59.644 "r_mbytes_per_sec": 0, 00:33:59.644 "w_mbytes_per_sec": 0 00:33:59.644 }, 00:33:59.644 "claimed": false, 00:33:59.644 "zoned": false, 00:33:59.644 "supported_io_types": { 00:33:59.644 "read": true, 00:33:59.644 "write": true, 00:33:59.644 "unmap": true, 00:33:59.644 "flush": false, 00:33:59.644 "reset": true, 00:33:59.644 "nvme_admin": false, 00:33:59.644 "nvme_io": false, 00:33:59.644 "nvme_io_md": false, 00:33:59.644 "write_zeroes": true, 00:33:59.644 "zcopy": false, 00:33:59.644 "get_zone_info": false, 00:33:59.644 "zone_management": false, 00:33:59.644 "zone_append": false, 00:33:59.644 "compare": false, 00:33:59.644 "compare_and_write": false, 00:33:59.644 "abort": false, 00:33:59.644 "seek_hole": true, 00:33:59.644 "seek_data": true, 00:33:59.644 "copy": false, 00:33:59.644 "nvme_iov_md": false 00:33:59.644 }, 00:33:59.644 "driver_specific": { 00:33:59.644 "lvol": { 00:33:59.644 "lvol_store_uuid": "a745766a-db3e-44eb-ae08-14f72e5ecd14", 00:33:59.644 "base_bdev": "nvme0n1", 00:33:59.644 "thin_provision": true, 00:33:59.644 "num_allocated_clusters": 0, 00:33:59.644 "snapshot": false, 00:33:59.644 "clone": false, 00:33:59.644 "esnap_clone": false 00:33:59.644 } 00:33:59.644 } 00:33:59.644 } 00:33:59.644 ]' 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:59.644 07:31:23 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:33:59.644 07:31:23 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:33:59.644 07:31:23 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 61ab7569-a49e-405c-bc33-e86fa912954b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:33:59.904 [2024-11-20 07:31:23.963064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.963133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:59.904 [2024-11-20 07:31:23.963159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:59.904 [2024-11-20 07:31:23.963171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.967213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.967256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:59.904 [2024-11-20 07:31:23.967273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.987 ms 00:33:59.904 [2024-11-20 07:31:23.967285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.967436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:59.904 [2024-11-20 07:31:23.968415] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:59.904 [2024-11-20 07:31:23.968455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.968467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:59.904 [2024-11-20 07:31:23.968495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:33:59.904 [2024-11-20 07:31:23.968505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.968639] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:33:59.904 [2024-11-20 07:31:23.971217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.971255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:59.904 [2024-11-20 07:31:23.971269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:59.904 [2024-11-20 07:31:23.971284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.986163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.986419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:59.904 [2024-11-20 07:31:23.986449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.696 ms 00:33:59.904 [2024-11-20 07:31:23.986467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.986712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.986732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:59.904 [2024-11-20 07:31:23.986744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.105 ms 00:33:59.904 [2024-11-20 07:31:23.986765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.986859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.986878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:59.904 [2024-11-20 07:31:23.986891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:59.904 [2024-11-20 07:31:23.986905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.986965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:33:59.904 [2024-11-20 07:31:23.993563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.993728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:59.904 [2024-11-20 07:31:23.993777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.605 ms 00:33:59.904 [2024-11-20 07:31:23.993790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.993900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.904 [2024-11-20 07:31:23.993925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:59.904 [2024-11-20 07:31:23.993942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:59.904 [2024-11-20 07:31:23.993974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.904 [2024-11-20 07:31:23.994039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:59.904 [2024-11-20 07:31:23.994196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:59.904 [2024-11-20 07:31:23.994222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:59.904 [2024-11-20 07:31:23.994239] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:59.904 [2024-11-20 07:31:23.994258] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:59.904 [2024-11-20 07:31:23.994273] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:59.904 [2024-11-20 07:31:23.994294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:33:59.904 [2024-11-20 07:31:23.994306] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:59.904 [2024-11-20 07:31:23.994321] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:59.904 [2024-11-20 07:31:23.994347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:59.904 [2024-11-20 07:31:23.994364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.905 [2024-11-20 07:31:23.994376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:59.905 [2024-11-20 07:31:23.994392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:33:59.905 [2024-11-20 07:31:23.994404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.905 [2024-11-20 07:31:23.994527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
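
The layout summary a few entries above fixes the metadata budget, and the numbers cross-check: with 4-byte L2P entries, the 23592960 entries reported need 23592960 * 4 = 94371840 bytes = 90.00 MiB, exactly the "Region l2p ... blocks: 90.00 MiB" region in the dump that follows; at a 4 KiB logical block the same table maps 92160 MiB of user LBA space, roughly the 103424.00 MiB base bdev minus the 10% overprovisioning requested in the bdev_ftl_create call above. The --l2p_dram_limit 60 argument presumably caps the DRAM-resident share of that 90 MiB table at 60 MiB, with the rest paged through the NV cache. A back-of-the-envelope check (illustrative shell, not test output):

    # 4-byte L2P entries -> MiB of L2P metadata
    echo $(( 23592960 * 4 / 1024 / 1024 ))      # prints 90
    # 4 KiB logical blocks -> MiB of mapped user space
    echo $(( 23592960 * 4096 / 1024 / 1024 ))   # prints 92160
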
00:33:59.905 [2024-11-20 07:31:23.994540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:59.905 [2024-11-20 07:31:23.994556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:59.905 [2024-11-20 07:31:23.994568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.905 [2024-11-20 07:31:23.994788] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:59.905 [2024-11-20 07:31:23.994804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:59.905 [2024-11-20 07:31:23.994841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:59.905 [2024-11-20 07:31:23.994854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.994869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:59.905 [2024-11-20 07:31:23.994880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.994895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:33:59.905 [2024-11-20 07:31:23.994907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:59.905 [2024-11-20 07:31:23.994922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:33:59.905 [2024-11-20 07:31:23.994933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:59.905 [2024-11-20 07:31:23.994947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:59.905 [2024-11-20 07:31:23.994958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:33:59.905 [2024-11-20 07:31:23.994971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:59.905 [2024-11-20 07:31:23.994983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:59.905 [2024-11-20 07:31:23.994997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:33:59.905 [2024-11-20 07:31:23.995008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:59.905 [2024-11-20 07:31:23.995036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:59.905 [2024-11-20 07:31:23.995087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:59.905 [2024-11-20 07:31:23.995122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:59.905 [2024-11-20 07:31:23.995157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995179] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:33:59.905 [2024-11-20 07:31:23.995189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:59.905 [2024-11-20 07:31:23.995230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:59.905 [2024-11-20 07:31:23.995253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:59.905 [2024-11-20 07:31:23.995263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:33:59.905 [2024-11-20 07:31:23.995275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:59.905 [2024-11-20 07:31:23.995285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:59.905 [2024-11-20 07:31:23.995298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:33:59.905 [2024-11-20 07:31:23.995308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:59.905 [2024-11-20 07:31:23.995330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:33:59.905 [2024-11-20 07:31:23.995342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995351] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:59.905 [2024-11-20 07:31:23.995366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:59.905 [2024-11-20 07:31:23.995376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:59.905 [2024-11-20 07:31:23.995402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:59.905 [2024-11-20 07:31:23.995420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:59.905 [2024-11-20 07:31:23.995430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:59.905 [2024-11-20 07:31:23.995443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:59.905 [2024-11-20 07:31:23.995453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:59.905 [2024-11-20 07:31:23.995466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:59.905 [2024-11-20 07:31:23.995482] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:59.905 [2024-11-20 07:31:23.995499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:33:59.905 [2024-11-20 07:31:23.995527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:33:59.905 [2024-11-20 07:31:23.995538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:33:59.905 [2024-11-20 07:31:23.995553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:33:59.905 [2024-11-20 07:31:23.995564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:33:59.905 [2024-11-20 07:31:23.995578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:33:59.905 [2024-11-20 07:31:23.995589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:33:59.905 [2024-11-20 07:31:23.995602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:33:59.905 [2024-11-20 07:31:23.995614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:33:59.905 [2024-11-20 07:31:23.995633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:33:59.905 [2024-11-20 07:31:23.995694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:59.905 [2024-11-20 07:31:23.995719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:59.905 [2024-11-20 07:31:23.995745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:59.905 [2024-11-20 07:31:23.995756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:59.905 [2024-11-20 07:31:23.995770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:59.905 [2024-11-20 07:31:23.995782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:59.905 [2024-11-20 07:31:23.995797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:59.905 [2024-11-20 07:31:23.995807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:33:59.905 [2024-11-20 07:31:23.995821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:59.905 [2024-11-20 07:31:23.995976] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:33:59.905 [2024-11-20 07:31:23.995997] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:34:02.442 [2024-11-20 07:31:26.546238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.442 [2024-11-20 07:31:26.546309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:34:02.442 [2024-11-20 07:31:26.546344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2550.237 ms 00:34:02.442 [2024-11-20 07:31:26.546359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.442 [2024-11-20 07:31:26.585431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.442 [2024-11-20 07:31:26.585487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:02.442 [2024-11-20 07:31:26.585521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.677 ms 00:34:02.442 [2024-11-20 07:31:26.585535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.442 [2024-11-20 07:31:26.585698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.442 [2024-11-20 07:31:26.585714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:02.442 [2024-11-20 07:31:26.585726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:02.442 [2024-11-20 07:31:26.585743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.442 [2024-11-20 07:31:26.642380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.442 [2024-11-20 07:31:26.642630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:02.442 [2024-11-20 07:31:26.642655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.570 ms 00:34:02.702 [2024-11-20 07:31:26.642669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.642790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.642807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:02.702 [2024-11-20 07:31:26.642837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:02.702 [2024-11-20 07:31:26.642851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.643292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.643316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:02.702 [2024-11-20 07:31:26.643327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:34:02.702 [2024-11-20 07:31:26.643340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.643455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.643469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:02.702 [2024-11-20 07:31:26.643480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:34:02.702 [2024-11-20 07:31:26.643503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.665082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.665266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:34:02.702 [2024-11-20 07:31:26.665291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.513 ms 00:34:02.702 [2024-11-20 07:31:26.665305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.678170] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:02.702 [2024-11-20 07:31:26.694783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.694849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:02.702 [2024-11-20 07:31:26.694885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.337 ms 00:34:02.702 [2024-11-20 07:31:26.694896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.774964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.775023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:34:02.702 [2024-11-20 07:31:26.775043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.942 ms 00:34:02.702 [2024-11-20 07:31:26.775055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.775292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.775306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:02.702 [2024-11-20 07:31:26.775324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:34:02.702 [2024-11-20 07:31:26.775334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.812596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.812638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:34:02.702 [2024-11-20 07:31:26.812656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.223 ms 00:34:02.702 [2024-11-20 07:31:26.812683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.850309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.850362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:34:02.702 [2024-11-20 07:31:26.850382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.531 ms 00:34:02.702 [2024-11-20 07:31:26.850392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.702 [2024-11-20 07:31:26.851230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.702 [2024-11-20 07:31:26.851263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:02.702 [2024-11-20 07:31:26.851280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:34:02.702 [2024-11-20 07:31:26.851290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:26.954440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:26.954511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:34:02.960 [2024-11-20 07:31:26.954540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.105 ms 00:34:02.960 [2024-11-20 07:31:26.954552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:26.994160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:26.994211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:34:02.960 [2024-11-20 07:31:26.994246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.456 ms 00:34:02.960 [2024-11-20 07:31:26.994257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:27.032348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:27.032392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:34:02.960 [2024-11-20 07:31:27.032427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.993 ms 00:34:02.960 [2024-11-20 07:31:27.032438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:27.070156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:27.070196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:02.960 [2024-11-20 07:31:27.070214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.624 ms 00:34:02.960 [2024-11-20 07:31:27.070241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:27.070334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:27.070350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:02.960 [2024-11-20 07:31:27.070367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:02.960 [2024-11-20 07:31:27.070378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:27.070470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:02.960 [2024-11-20 07:31:27.070482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:02.960 [2024-11-20 07:31:27.070495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:02.960 [2024-11-20 07:31:27.070505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:02.960 [2024-11-20 07:31:27.071571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:02.960 [2024-11-20 07:31:27.076207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3108.195 ms, result 0 00:34:02.960 [2024-11-20 07:31:27.077264] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:02.960 { 00:34:02.960 "name": "ftl0", 00:34:02.960 "uuid": "2e94c8c0-f203-44d7-914b-d7ad4a7525b4" 00:34:02.960 } 00:34:02.960 07:31:27 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:02.960 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:03.219 07:31:27 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:34:03.478 [ 00:34:03.478 { 00:34:03.478 "name": "ftl0", 00:34:03.478 "aliases": [ 00:34:03.478 "2e94c8c0-f203-44d7-914b-d7ad4a7525b4" 00:34:03.478 ], 00:34:03.478 "product_name": "FTL disk", 00:34:03.478 "block_size": 4096, 00:34:03.478 "num_blocks": 23592960, 00:34:03.478 "uuid": "2e94c8c0-f203-44d7-914b-d7ad4a7525b4", 00:34:03.478 "assigned_rate_limits": { 00:34:03.478 "rw_ios_per_sec": 0, 00:34:03.478 "rw_mbytes_per_sec": 0, 00:34:03.478 "r_mbytes_per_sec": 0, 00:34:03.478 "w_mbytes_per_sec": 0 00:34:03.478 }, 00:34:03.478 "claimed": false, 00:34:03.478 "zoned": false, 00:34:03.478 "supported_io_types": { 00:34:03.478 "read": true, 00:34:03.478 "write": true, 00:34:03.478 "unmap": true, 00:34:03.478 "flush": true, 00:34:03.478 "reset": false, 00:34:03.478 "nvme_admin": false, 00:34:03.478 "nvme_io": false, 00:34:03.478 "nvme_io_md": false, 00:34:03.478 "write_zeroes": true, 00:34:03.478 "zcopy": false, 00:34:03.478 "get_zone_info": false, 00:34:03.478 "zone_management": false, 00:34:03.478 "zone_append": false, 00:34:03.478 "compare": false, 00:34:03.478 "compare_and_write": false, 00:34:03.478 "abort": false, 00:34:03.478 "seek_hole": false, 00:34:03.478 "seek_data": false, 00:34:03.478 "copy": false, 00:34:03.478 "nvme_iov_md": false 00:34:03.478 }, 00:34:03.478 "driver_specific": { 00:34:03.478 "ftl": { 00:34:03.478 "base_bdev": "61ab7569-a49e-405c-bc33-e86fa912954b", 00:34:03.478 "cache": "nvc0n1p0" 00:34:03.478 } 00:34:03.478 } 00:34:03.478 } 00:34:03.478 ] 00:34:03.478 07:31:27 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:34:03.478 07:31:27 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:34:03.478 07:31:27 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:34:03.755 07:31:27 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:34:03.755 07:31:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:34:04.052 07:31:27 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:34:04.052 { 00:34:04.052 "name": "ftl0", 00:34:04.052 "aliases": [ 00:34:04.052 "2e94c8c0-f203-44d7-914b-d7ad4a7525b4" 00:34:04.052 ], 00:34:04.052 "product_name": "FTL disk", 00:34:04.052 "block_size": 4096, 00:34:04.052 "num_blocks": 23592960, 00:34:04.052 "uuid": "2e94c8c0-f203-44d7-914b-d7ad4a7525b4", 00:34:04.052 "assigned_rate_limits": { 00:34:04.052 "rw_ios_per_sec": 0, 00:34:04.052 "rw_mbytes_per_sec": 0, 00:34:04.052 "r_mbytes_per_sec": 0, 00:34:04.052 "w_mbytes_per_sec": 0 00:34:04.052 }, 00:34:04.052 "claimed": false, 00:34:04.052 "zoned": false, 00:34:04.052 "supported_io_types": { 00:34:04.052 "read": true, 00:34:04.052 "write": true, 00:34:04.052 "unmap": true, 00:34:04.052 "flush": true, 00:34:04.052 "reset": false, 00:34:04.052 "nvme_admin": false, 00:34:04.052 "nvme_io": false, 00:34:04.052 "nvme_io_md": false, 00:34:04.052 "write_zeroes": true, 00:34:04.052 "zcopy": false, 00:34:04.052 "get_zone_info": false, 00:34:04.052 "zone_management": false, 00:34:04.052 "zone_append": false, 00:34:04.052 "compare": false, 00:34:04.052 "compare_and_write": false, 00:34:04.052 "abort": false, 00:34:04.052 "seek_hole": false, 00:34:04.052 "seek_data": false, 00:34:04.052 "copy": false, 00:34:04.052 "nvme_iov_md": false 00:34:04.052 }, 00:34:04.052 "driver_specific": { 00:34:04.052 "ftl": { 00:34:04.052 "base_bdev": 
"61ab7569-a49e-405c-bc33-e86fa912954b", 00:34:04.052 "cache": "nvc0n1p0" 00:34:04.052 } 00:34:04.052 } 00:34:04.052 } 00:34:04.052 ]' 00:34:04.052 07:31:27 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:34:04.052 07:31:28 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:34:04.052 07:31:28 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:34:04.313 [2024-11-20 07:31:28.270827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.270888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:04.313 [2024-11-20 07:31:28.270910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:04.313 [2024-11-20 07:31:28.270927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.270965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:34:04.313 [2024-11-20 07:31:28.275140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.275177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:04.313 [2024-11-20 07:31:28.275199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.150 ms 00:34:04.313 [2024-11-20 07:31:28.275211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.275749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.275766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:04.313 [2024-11-20 07:31:28.275780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:34:04.313 [2024-11-20 07:31:28.275791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.278719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.278752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:04.313 [2024-11-20 07:31:28.278767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:34:04.313 [2024-11-20 07:31:28.278778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.284794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.284966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:04.313 [2024-11-20 07:31:28.284993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.976 ms 00:34:04.313 [2024-11-20 07:31:28.285004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.323492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.323533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:04.313 [2024-11-20 07:31:28.323554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.396 ms 00:34:04.313 [2024-11-20 07:31:28.323565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.346476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.346516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:04.313 [2024-11-20 07:31:28.346535] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.817 ms 00:34:04.313 [2024-11-20 07:31:28.346549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.346769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.346784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:04.313 [2024-11-20 07:31:28.346798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:34:04.313 [2024-11-20 07:31:28.346808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.383803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.383847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:04.313 [2024-11-20 07:31:28.383866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.937 ms 00:34:04.313 [2024-11-20 07:31:28.383876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.422105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.422351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:04.313 [2024-11-20 07:31:28.422387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.129 ms 00:34:04.313 [2024-11-20 07:31:28.422399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.459379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.459427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:04.313 [2024-11-20 07:31:28.459446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.872 ms 00:34:04.313 [2024-11-20 07:31:28.459457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.496598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.313 [2024-11-20 07:31:28.496648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:04.313 [2024-11-20 07:31:28.496668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.992 ms 00:34:04.313 [2024-11-20 07:31:28.496679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.313 [2024-11-20 07:31:28.496777] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:04.313 [2024-11-20 07:31:28.496797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 
[2024-11-20 07:31:28.496929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.496991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:34:04.313 [2024-11-20 07:31:28.497271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:04.313 [2024-11-20 07:31:28.497430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.497983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:04.314 [2024-11-20 07:31:28.498198] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:04.314 [2024-11-20 07:31:28.498214] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:04.314 [2024-11-20 07:31:28.498226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:04.314 [2024-11-20 07:31:28.498239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:04.314 [2024-11-20 07:31:28.498249] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:04.314 [2024-11-20 07:31:28.498264] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:04.314 [2024-11-20 07:31:28.498283] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:04.314 [2024-11-20 07:31:28.498301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:34:04.314 [2024-11-20 07:31:28.498318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:04.314 [2024-11-20 07:31:28.498336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:04.314 [2024-11-20 07:31:28.498348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:04.314 [2024-11-20 07:31:28.498368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.314 [2024-11-20 07:31:28.498388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:04.314 [2024-11-20 07:31:28.498405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.591 ms 00:34:04.314 [2024-11-20 07:31:28.498416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.573 [2024-11-20 07:31:28.519084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.573 [2024-11-20 07:31:28.519126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:04.574 [2024-11-20 07:31:28.519149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.627 ms 00:34:04.574 [2024-11-20 07:31:28.519160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 07:31:28.519741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.574 [2024-11-20 07:31:28.519757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:04.574 [2024-11-20 07:31:28.519771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:34:04.574 [2024-11-20 07:31:28.519787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 07:31:28.592112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.574 [2024-11-20 07:31:28.592339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:04.574 [2024-11-20 07:31:28.592370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.574 [2024-11-20 07:31:28.592382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 07:31:28.592554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.574 [2024-11-20 07:31:28.592571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:04.574 [2024-11-20 07:31:28.592585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.574 [2024-11-20 07:31:28.592597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 07:31:28.592681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.574 [2024-11-20 07:31:28.592695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:04.574 [2024-11-20 07:31:28.592717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.574 [2024-11-20 07:31:28.592729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 07:31:28.592765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.574 [2024-11-20 07:31:28.592777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:04.574 [2024-11-20 07:31:28.592790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.574 [2024-11-20 07:31:28.592802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.574 [2024-11-20 
07:31:28.729980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.574 [2024-11-20 07:31:28.730036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:04.574 [2024-11-20 07:31:28.730055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.574 [2024-11-20 07:31:28.730066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.839660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.839714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:04.833 [2024-11-20 07:31:28.839733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.839744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.839895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.839909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:04.833 [2024-11-20 07:31:28.839955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.839970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.840046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.840058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:04.833 [2024-11-20 07:31:28.840073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.840084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.840267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.840297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:04.833 [2024-11-20 07:31:28.840318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.840337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.840413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.840438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:04.833 [2024-11-20 07:31:28.840453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.840464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.840528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.840540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:04.833 [2024-11-20 07:31:28.840557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.840569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.833 [2024-11-20 07:31:28.840646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.833 [2024-11-20 07:31:28.840665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:04.833 [2024-11-20 07:31:28.840679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.833 [2024-11-20 07:31:28.840690] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.834 [2024-11-20 07:31:28.840958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.081 ms, result 0 00:34:04.834 true 00:34:04.834 07:31:28 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76247 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76247 ']' 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76247 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76247 00:34:04.834 killing process with pid 76247 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76247' 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76247 00:34:04.834 07:31:28 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76247 00:34:10.106 07:31:34 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:34:11.478 65536+0 records in 00:34:11.479 65536+0 records out 00:34:11.479 268435456 bytes (268 MB, 256 MiB) copied, 1.33269 s, 201 MB/s 00:34:11.479 07:31:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:11.479 [2024-11-20 07:31:35.624548] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
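The dd and spdk_dd invocations traced above (trim.sh@66 and trim.sh@69) form the data-fill step of the trim test: 65536 random 4 KiB blocks (256 MiB) are generated, then pushed through the ftl0 bdev using the saved JSON subsystem config. A minimal standalone sketch of that flow follows; the pattern-file path is taken from the --if argument shown in the trace, and the assumption that dd writes to that same path is hedged below because the of= argument itself is not visible in the log:

# Sketch of the harness's data-fill step. ASSUMPTION: dd's output path matches
# the file spdk_dd later reads with --if (the trace elides dd's of= argument).
dd if=/dev/urandom \
   of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
   bs=4K count=65536
# Write the pattern through the FTL bdev defined in the saved config.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
   --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
   --ob=ftl0 \
   --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The dd summary above is self-consistent: 268435456 bytes copied in 1.33269 s is about 201 MB/s, matching the reported rate.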
00:34:11.479 [2024-11-20 07:31:35.624708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76458 ] 00:34:11.789 [2024-11-20 07:31:35.816990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.789 [2024-11-20 07:31:35.972475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.356 [2024-11-20 07:31:36.326530] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:12.356 [2024-11-20 07:31:36.326596] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:12.356 [2024-11-20 07:31:36.490902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.490961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:12.356 [2024-11-20 07:31:36.490978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:12.356 [2024-11-20 07:31:36.490989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.494177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.494352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:12.356 [2024-11-20 07:31:36.494374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:34:12.356 [2024-11-20 07:31:36.494385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.494517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:12.356 [2024-11-20 07:31:36.495467] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:12.356 [2024-11-20 07:31:36.495501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.495513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:12.356 [2024-11-20 07:31:36.495524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:34:12.356 [2024-11-20 07:31:36.495545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.497094] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:12.356 [2024-11-20 07:31:36.516276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.516449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:12.356 [2024-11-20 07:31:36.516471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.182 ms 00:34:12.356 [2024-11-20 07:31:36.516483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.516587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.516602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:12.356 [2024-11-20 07:31:36.516613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:34:12.356 [2024-11-20 07:31:36.516624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.523421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:12.356 [2024-11-20 07:31:36.523591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:12.356 [2024-11-20 07:31:36.523613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.753 ms 00:34:12.356 [2024-11-20 07:31:36.523623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.523736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.523751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:12.356 [2024-11-20 07:31:36.523763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:34:12.356 [2024-11-20 07:31:36.523773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.523805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.523844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:12.356 [2024-11-20 07:31:36.523856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:12.356 [2024-11-20 07:31:36.523866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.523892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:12.356 [2024-11-20 07:31:36.528751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.528784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:12.356 [2024-11-20 07:31:36.528797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:34:12.356 [2024-11-20 07:31:36.528807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.528890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.356 [2024-11-20 07:31:36.528904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:12.356 [2024-11-20 07:31:36.528915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:12.356 [2024-11-20 07:31:36.528925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.356 [2024-11-20 07:31:36.528949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:12.356 [2024-11-20 07:31:36.528981] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:12.356 [2024-11-20 07:31:36.529019] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:12.356 [2024-11-20 07:31:36.529037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:12.357 [2024-11-20 07:31:36.529130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:12.357 [2024-11-20 07:31:36.529144] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:12.357 [2024-11-20 07:31:36.529157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:12.357 [2024-11-20 07:31:36.529170] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529185] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:12.357 [2024-11-20 07:31:36.529207] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:12.357 [2024-11-20 07:31:36.529216] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:12.357 [2024-11-20 07:31:36.529226] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:12.357 [2024-11-20 07:31:36.529236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.357 [2024-11-20 07:31:36.529246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:12.357 [2024-11-20 07:31:36.529256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:34:12.357 [2024-11-20 07:31:36.529267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.357 [2024-11-20 07:31:36.529344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.357 [2024-11-20 07:31:36.529357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:12.357 [2024-11-20 07:31:36.529376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:12.357 [2024-11-20 07:31:36.529386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.357 [2024-11-20 07:31:36.529484] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:12.357 [2024-11-20 07:31:36.529502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:12.357 [2024-11-20 07:31:36.529513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:12.357 [2024-11-20 07:31:36.529544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:12.357 [2024-11-20 07:31:36.529573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:12.357 [2024-11-20 07:31:36.529592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:12.357 [2024-11-20 07:31:36.529601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:12.357 [2024-11-20 07:31:36.529611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:12.357 [2024-11-20 07:31:36.529634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:12.357 [2024-11-20 07:31:36.529644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:12.357 [2024-11-20 07:31:36.529654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:12.357 [2024-11-20 07:31:36.529675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529685] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:12.357 [2024-11-20 07:31:36.529704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:12.357 [2024-11-20 07:31:36.529733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:12.357 [2024-11-20 07:31:36.529762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:12.357 [2024-11-20 07:31:36.529790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:12.357 [2024-11-20 07:31:36.529831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:12.357 [2024-11-20 07:31:36.529850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:12.357 [2024-11-20 07:31:36.529859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:12.357 [2024-11-20 07:31:36.529869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:12.357 [2024-11-20 07:31:36.529879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:12.357 [2024-11-20 07:31:36.529889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:12.357 [2024-11-20 07:31:36.529898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:12.357 [2024-11-20 07:31:36.529917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:12.357 [2024-11-20 07:31:36.529926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529935] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:12.357 [2024-11-20 07:31:36.529945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:12.357 [2024-11-20 07:31:36.529954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:12.357 [2024-11-20 07:31:36.529980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.357 [2024-11-20 07:31:36.529991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:12.357 [2024-11-20 07:31:36.530000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:12.357 [2024-11-20 07:31:36.530011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:12.357 
[2024-11-20 07:31:36.530021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:12.357 [2024-11-20 07:31:36.530030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:12.357 [2024-11-20 07:31:36.530040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:12.357 [2024-11-20 07:31:36.530051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:12.357 [2024-11-20 07:31:36.530063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:12.357 [2024-11-20 07:31:36.530075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:12.357 [2024-11-20 07:31:36.530086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:12.357 [2024-11-20 07:31:36.530097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:12.357 [2024-11-20 07:31:36.530108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:12.357 [2024-11-20 07:31:36.530119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:12.357 [2024-11-20 07:31:36.530129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:12.357 [2024-11-20 07:31:36.530139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:12.357 [2024-11-20 07:31:36.530150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:12.357 [2024-11-20 07:31:36.530160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:12.358 [2024-11-20 07:31:36.530171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:12.358 [2024-11-20 07:31:36.530223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:12.358 [2024-11-20 07:31:36.530235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:12.358 [2024-11-20 07:31:36.530256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:12.358 [2024-11-20 07:31:36.530267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:12.358 [2024-11-20 07:31:36.530278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:12.358 [2024-11-20 07:31:36.530289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.358 [2024-11-20 07:31:36.530299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:12.358 [2024-11-20 07:31:36.530316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:34:12.358 [2024-11-20 07:31:36.530326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.570917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.571147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:12.617 [2024-11-20 07:31:36.571267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.529 ms 00:34:12.617 [2024-11-20 07:31:36.571307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.571552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.571661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:12.617 [2024-11-20 07:31:36.571749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:12.617 [2024-11-20 07:31:36.571786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.631948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.632131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:12.617 [2024-11-20 07:31:36.632273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.048 ms 00:34:12.617 [2024-11-20 07:31:36.632362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.632514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.632554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:12.617 [2024-11-20 07:31:36.632632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:12.617 [2024-11-20 07:31:36.632667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.633317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.633339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:12.617 [2024-11-20 07:31:36.633352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:34:12.617 [2024-11-20 07:31:36.633379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.633506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.633521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:12.617 [2024-11-20 07:31:36.633533] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:34:12.617 [2024-11-20 07:31:36.633543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.654709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.654757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:12.617 [2024-11-20 07:31:36.654774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.139 ms 00:34:12.617 [2024-11-20 07:31:36.654785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.674903] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:34:12.617 [2024-11-20 07:31:36.675082] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:12.617 [2024-11-20 07:31:36.675105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.675116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:12.617 [2024-11-20 07:31:36.675129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.166 ms 00:34:12.617 [2024-11-20 07:31:36.675139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.706078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.706255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:12.617 [2024-11-20 07:31:36.706293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.843 ms 00:34:12.617 [2024-11-20 07:31:36.706305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.725322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.725371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:12.617 [2024-11-20 07:31:36.725385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.851 ms 00:34:12.617 [2024-11-20 07:31:36.725396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.744582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.744629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:12.617 [2024-11-20 07:31:36.744643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.097 ms 00:34:12.617 [2024-11-20 07:31:36.744654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.617 [2024-11-20 07:31:36.745477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.617 [2024-11-20 07:31:36.745510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:12.617 [2024-11-20 07:31:36.745523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:34:12.617 [2024-11-20 07:31:36.745534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.875 [2024-11-20 07:31:36.835864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.835920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:12.876 [2024-11-20 07:31:36.835939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.294 ms 00:34:12.876 [2024-11-20 07:31:36.835951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.848444] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:12.876 [2024-11-20 07:31:36.865539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.865603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:12.876 [2024-11-20 07:31:36.865620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.453 ms 00:34:12.876 [2024-11-20 07:31:36.865632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.865790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.865808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:12.876 [2024-11-20 07:31:36.865845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:12.876 [2024-11-20 07:31:36.865855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.865926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.865939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:12.876 [2024-11-20 07:31:36.865950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:12.876 [2024-11-20 07:31:36.865985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.866019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.866032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:12.876 [2024-11-20 07:31:36.866047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:12.876 [2024-11-20 07:31:36.866058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.866094] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:12.876 [2024-11-20 07:31:36.866108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.866119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:12.876 [2024-11-20 07:31:36.866131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:12.876 [2024-11-20 07:31:36.866142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.904089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.904301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:12.876 [2024-11-20 07:31:36.904326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.917 ms 00:34:12.876 [2024-11-20 07:31:36.904339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.876 [2024-11-20 07:31:36.904473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.876 [2024-11-20 07:31:36.904487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:12.876 [2024-11-20 07:31:36.904498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:12.876 [2024-11-20 07:31:36.904509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
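Each FTL management step above is traced by trace_step in mngt/ftl_mngt.c as an Action / name / duration / status quadruple. A minimal sketch (not part of the SPDK tree) that pairs each "name:" with the "duration:" that follows it and prints the slowest steps; the regex assumes only the log format visible above:

import re, sys

# Illustrative trace_step parser; feed it a captured log on stdin.
# Each step is logged as "name: <step>" followed later by "duration: <N> ms".
text = sys.stdin.read()
pairs = re.findall(r"name: ([^\n]*?)\s+\d{2}:\d{2}:\d{2}.*?duration: ([0-9.]+) ms",
                   text, re.S)
for name, ms in sorted(pairs, key=lambda p: float(p[1]), reverse=True)[:10]:
    print(f"{float(ms):8.3f} ms  {name}")

Against the startup sequence above this would rank "Restore P2L checkpoints" (90.294 ms), "Initialize NV cache" (60.048 ms) and "Initialize metadata" (40.529 ms) as the dominant costs, consistent with the 414.268 ms total reported for 'FTL startup' below.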
00:34:12.876 [2024-11-20 07:31:36.905690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:12.876 [2024-11-20 07:31:36.910285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.268 ms, result 0 00:34:12.876 [2024-11-20 07:31:36.911125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:12.876 [2024-11-20 07:31:36.929910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:13.811  [2024-11-20T07:31:38.949Z] Copying: 30/256 [MB] (30 MBps) [2024-11-20T07:31:40.324Z] Copying: 61/256 [MB] (31 MBps) [2024-11-20T07:31:41.264Z] Copying: 92/256 [MB] (30 MBps) [2024-11-20T07:31:42.204Z] Copying: 123/256 [MB] (31 MBps) [2024-11-20T07:31:43.139Z] Copying: 155/256 [MB] (31 MBps) [2024-11-20T07:31:44.077Z] Copying: 186/256 [MB] (30 MBps) [2024-11-20T07:31:45.014Z] Copying: 217/256 [MB] (31 MBps) [2024-11-20T07:31:45.274Z] Copying: 248/256 [MB] (30 MBps) [2024-11-20T07:31:45.274Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-20 07:31:45.180074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:21.071 [2024-11-20 07:31:45.195199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.195242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:21.071 [2024-11-20 07:31:45.195258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:21.071 [2024-11-20 07:31:45.195285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.071 [2024-11-20 07:31:45.195309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:34:21.071 [2024-11-20 07:31:45.199904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.199941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:21.071 [2024-11-20 07:31:45.199955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.578 ms 00:34:21.071 [2024-11-20 07:31:45.199965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.071 [2024-11-20 07:31:45.201708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.201751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:21.071 [2024-11-20 07:31:45.201766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:34:21.071 [2024-11-20 07:31:45.201777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.071 [2024-11-20 07:31:45.207979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.208015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:21.071 [2024-11-20 07:31:45.208035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:34:21.071 [2024-11-20 07:31:45.208045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.071 [2024-11-20 07:31:45.214158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.214318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:21.071 [2024-11-20 07:31:45.214341] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.061 ms 00:34:21.071 [2024-11-20 07:31:45.214354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.071 [2024-11-20 07:31:45.251726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.071 [2024-11-20 07:31:45.251915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:21.071 [2024-11-20 07:31:45.251938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.314 ms 00:34:21.071 [2024-11-20 07:31:45.251948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.273948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.273992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:21.331 [2024-11-20 07:31:45.274021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.942 ms 00:34:21.331 [2024-11-20 07:31:45.274036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.274204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.274219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:21.331 [2024-11-20 07:31:45.274231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:34:21.331 [2024-11-20 07:31:45.274241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.313604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.313825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:21.331 [2024-11-20 07:31:45.313851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.340 ms 00:34:21.331 [2024-11-20 07:31:45.313862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.351326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.351496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:21.331 [2024-11-20 07:31:45.351518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.363 ms 00:34:21.331 [2024-11-20 07:31:45.351530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.388842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.388884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:21.331 [2024-11-20 07:31:45.388898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.211 ms 00:34:21.331 [2024-11-20 07:31:45.388909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.425640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.331 [2024-11-20 07:31:45.425680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:21.331 [2024-11-20 07:31:45.425695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.642 ms 00:34:21.331 [2024-11-20 07:31:45.425721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.331 [2024-11-20 07:31:45.425776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:21.331 [2024-11-20 07:31:45.425802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.425994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426121] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:21.331 [2024-11-20 07:31:45.426229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 
07:31:45.426392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:34:21.332 [2024-11-20 07:31:45.426657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:34:21.332 [2024-11-20 07:31:45.426954] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:21.332 [2024-11-20 07:31:45.426964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:21.332 [2024-11-20 07:31:45.426975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:21.332 [2024-11-20 07:31:45.426985] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:21.332 [2024-11-20 07:31:45.426995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:21.332 [2024-11-20 07:31:45.427006] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:21.332 [2024-11-20 07:31:45.427015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:21.332 [2024-11-20 07:31:45.427026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:21.332 [2024-11-20 07:31:45.427036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:21.332 [2024-11-20 07:31:45.427046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:21.333 [2024-11-20 07:31:45.427054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:21.333 [2024-11-20 07:31:45.427065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.333 [2024-11-20 07:31:45.427075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:21.333 [2024-11-20 07:31:45.427090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:34:21.333 [2024-11-20 07:31:45.427100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.333 [2024-11-20 07:31:45.447866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.333 [2024-11-20 07:31:45.447901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:21.333 [2024-11-20 07:31:45.447915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.744 ms 00:34:21.333 [2024-11-20 07:31:45.447925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.333 [2024-11-20 07:31:45.448480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.333 [2024-11-20 07:31:45.448501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:21.333 [2024-11-20 07:31:45.448512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:34:21.333 [2024-11-20 07:31:45.448522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.333 [2024-11-20 07:31:45.505404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.333 [2024-11-20 07:31:45.505445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:21.333 [2024-11-20 07:31:45.505459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.333 [2024-11-20 07:31:45.505471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.333 [2024-11-20 07:31:45.505567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.333 [2024-11-20 07:31:45.505585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:21.333 [2024-11-20 07:31:45.505597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.333 [2024-11-20 07:31:45.505607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
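On the statistics dump above: WAF is the write amplification factor, the ratio of total media writes to user writes, so with total writes = 960 (all FTL-internal) and user writes = 0 the ratio is undefined, which ftl_debug.c prints as "WAF: inf". A tiny illustrative helper restating that, not code taken from ftl_debug.c:

def waf(total_writes: int, user_writes: int) -> float:
    # Undefined when the user wrote nothing; mirror the log's "inf".
    return float("inf") if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))  # inf, matching the dump for device ftl0 above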
00:34:21.333 [2024-11-20 07:31:45.505657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.333 [2024-11-20 07:31:45.505674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:21.333 [2024-11-20 07:31:45.505685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.333 [2024-11-20 07:31:45.505695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.333 [2024-11-20 07:31:45.505715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.333 [2024-11-20 07:31:45.505726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:21.333 [2024-11-20 07:31:45.505740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.333 [2024-11-20 07:31:45.505750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.635351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.635595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:21.592 [2024-11-20 07:31:45.635620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.635631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.741676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.741746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:21.592 [2024-11-20 07:31:45.741769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.741779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.741899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.741913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:21.592 [2024-11-20 07:31:45.741941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.741952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.741984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.742004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:21.592 [2024-11-20 07:31:45.742016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.742032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.742153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.742168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:21.592 [2024-11-20 07:31:45.742180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.742191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.742230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.742244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:21.592 [2024-11-20 07:31:45.742255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 
07:31:45.742265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.592 [2024-11-20 07:31:45.742313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.592 [2024-11-20 07:31:45.742342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:21.592 [2024-11-20 07:31:45.742354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.592 [2024-11-20 07:31:45.742366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.593 [2024-11-20 07:31:45.742416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:21.593 [2024-11-20 07:31:45.742430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:21.593 [2024-11-20 07:31:45.742441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:21.593 [2024-11-20 07:31:45.742457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.593 [2024-11-20 07:31:45.742613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.391 ms, result 0 00:34:22.972 00:34:22.972 00:34:22.972 07:31:46 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76573 00:34:22.972 07:31:46 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:34:22.972 07:31:46 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76573 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76573 ']' 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.972 07:31:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:34:22.972 [2024-11-20 07:31:47.131624] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
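The "waitforlisten 76573" call above comes from common/autotest_common.sh and blocks, up to max_retries (100 per the trace above), until the freshly launched spdk_tgt accepts RPC connections on /var/tmp/spdk.sock; only then does trim.sh issue scripts/rpc.py load_config. A rough Python equivalent of that polling loop, as an illustrative sketch rather than the actual shell implementation (the real helper also takes the target's pid, 76573 here; this sketch only polls the socket):

import socket, sys, time

def wait_for_rpc(sock_path="/var/tmp/spdk.sock", retries=100, delay=0.1):
    """Poll until an SPDK target accepts connections on its RPC UNIX socket."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            # Socket missing or not yet listening; retry after a short pause.
            time.sleep(delay)
        finally:
            s.close()
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_for_rpc() else 1)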
00:34:22.972 [2024-11-20 07:31:47.132036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76573 ] 00:34:23.232 [2024-11-20 07:31:47.321695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.491 [2024-11-20 07:31:47.436304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.427 07:31:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:24.427 07:31:48 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:34:24.427 07:31:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:34:24.427 [2024-11-20 07:31:48.572727] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:24.427 [2024-11-20 07:31:48.572961] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:24.687 [2024-11-20 07:31:48.758186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.758422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:24.687 [2024-11-20 07:31:48.758460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:24.687 [2024-11-20 07:31:48.758473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.762563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.762602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:24.687 [2024-11-20 07:31:48.762618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:34:24.687 [2024-11-20 07:31:48.762628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.762737] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:24.687 [2024-11-20 07:31:48.763866] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:24.687 [2024-11-20 07:31:48.763903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.763915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:24.687 [2024-11-20 07:31:48.763928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:34:24.687 [2024-11-20 07:31:48.763939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.765459] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:24.687 [2024-11-20 07:31:48.785476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.785521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:24.687 [2024-11-20 07:31:48.785538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.022 ms 00:34:24.687 [2024-11-20 07:31:48.785552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.785656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.785679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:24.687 [2024-11-20 07:31:48.785691] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:24.687 [2024-11-20 07:31:48.785706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.792668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.792716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:24.687 [2024-11-20 07:31:48.792746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:34:24.687 [2024-11-20 07:31:48.792762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.792933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.792955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:24.687 [2024-11-20 07:31:48.792967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:34:24.687 [2024-11-20 07:31:48.792990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.793029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.793045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:24.687 [2024-11-20 07:31:48.793056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:24.687 [2024-11-20 07:31:48.793072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.793100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:24.687 [2024-11-20 07:31:48.798212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.798245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:24.687 [2024-11-20 07:31:48.798263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.114 ms 00:34:24.687 [2024-11-20 07:31:48.798274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.798358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.798371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:24.687 [2024-11-20 07:31:48.798387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:24.687 [2024-11-20 07:31:48.798403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.798431] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:24.687 [2024-11-20 07:31:48.798455] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:24.687 [2024-11-20 07:31:48.798509] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:24.687 [2024-11-20 07:31:48.798531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:24.687 [2024-11-20 07:31:48.798629] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:24.687 [2024-11-20 07:31:48.798643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:24.687 [2024-11-20 07:31:48.798671] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:24.687 [2024-11-20 07:31:48.798685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:24.687 [2024-11-20 07:31:48.798703] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:24.687 [2024-11-20 07:31:48.798715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:24.687 [2024-11-20 07:31:48.798730] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:24.687 [2024-11-20 07:31:48.798741] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:24.687 [2024-11-20 07:31:48.798760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:24.687 [2024-11-20 07:31:48.798772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.798788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:24.687 [2024-11-20 07:31:48.798800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:34:24.687 [2024-11-20 07:31:48.798831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.798916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.687 [2024-11-20 07:31:48.798950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:24.687 [2024-11-20 07:31:48.798961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:24.687 [2024-11-20 07:31:48.798976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.687 [2024-11-20 07:31:48.799068] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:24.687 [2024-11-20 07:31:48.799086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:24.687 [2024-11-20 07:31:48.799097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:24.687 [2024-11-20 07:31:48.799114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.687 [2024-11-20 07:31:48.799124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:24.687 [2024-11-20 07:31:48.799138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:24.687 [2024-11-20 07:31:48.799148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:24.687 [2024-11-20 07:31:48.799175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:24.687 [2024-11-20 07:31:48.799185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:24.687 [2024-11-20 07:31:48.799199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:24.687 [2024-11-20 07:31:48.799209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:24.687 [2024-11-20 07:31:48.799224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:24.688 [2024-11-20 07:31:48.799233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:24.688 [2024-11-20 07:31:48.799249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:24.688 [2024-11-20 07:31:48.799259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:24.688 [2024-11-20 07:31:48.799273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.688 
[2024-11-20 07:31:48.799283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:24.688 [2024-11-20 07:31:48.799298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:24.688 [2024-11-20 07:31:48.799343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:24.688 [2024-11-20 07:31:48.799387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:24.688 [2024-11-20 07:31:48.799421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:24.688 [2024-11-20 07:31:48.799459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:24.688 [2024-11-20 07:31:48.799492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:24.688 [2024-11-20 07:31:48.799518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:24.688 [2024-11-20 07:31:48.799532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:24.688 [2024-11-20 07:31:48.799541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:24.688 [2024-11-20 07:31:48.799556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:24.688 [2024-11-20 07:31:48.799565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:24.688 [2024-11-20 07:31:48.799586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:24.688 [2024-11-20 07:31:48.799609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:24.688 [2024-11-20 07:31:48.799619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799633] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:24.688 [2024-11-20 07:31:48.799649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:24.688 [2024-11-20 07:31:48.799665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:24.688 [2024-11-20 07:31:48.799690] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:34:24.688 [2024-11-20 07:31:48.799700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:24.688 [2024-11-20 07:31:48.799715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:24.688 [2024-11-20 07:31:48.799724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:24.688 [2024-11-20 07:31:48.799738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:24.688 [2024-11-20 07:31:48.799748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:24.688 [2024-11-20 07:31:48.799764] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:24.688 [2024-11-20 07:31:48.799777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.799798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:24.688 [2024-11-20 07:31:48.799809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:24.688 [2024-11-20 07:31:48.799836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:24.688 [2024-11-20 07:31:48.799847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:24.688 [2024-11-20 07:31:48.799862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:24.688 [2024-11-20 07:31:48.799873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:24.688 [2024-11-20 07:31:48.799888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:24.688 [2024-11-20 07:31:48.799898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:24.688 [2024-11-20 07:31:48.799913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:24.688 [2024-11-20 07:31:48.799924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.799937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.799947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.799960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.799971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:24.688 [2024-11-20 07:31:48.799984] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:24.688 [2024-11-20 
07:31:48.799995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.800011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:24.688 [2024-11-20 07:31:48.800022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:24.688 [2024-11-20 07:31:48.800034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:24.688 [2024-11-20 07:31:48.800045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:24.688 [2024-11-20 07:31:48.800059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.688 [2024-11-20 07:31:48.800070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:24.688 [2024-11-20 07:31:48.800084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:34:24.688 [2024-11-20 07:31:48.800096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.688 [2024-11-20 07:31:48.842148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.688 [2024-11-20 07:31:48.842199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:24.688 [2024-11-20 07:31:48.842222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.984 ms 00:34:24.688 [2024-11-20 07:31:48.842238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.689 [2024-11-20 07:31:48.842399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.689 [2024-11-20 07:31:48.842413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:24.689 [2024-11-20 07:31:48.842430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:24.689 [2024-11-20 07:31:48.842440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.890269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.890324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:24.948 [2024-11-20 07:31:48.890344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.794 ms 00:34:24.948 [2024-11-20 07:31:48.890355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.890475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.890488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:24.948 [2024-11-20 07:31:48.890504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:24.948 [2024-11-20 07:31:48.890514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.890981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.891001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:24.948 [2024-11-20 07:31:48.891017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:34:24.948 [2024-11-20 07:31:48.891028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.891153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.891167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:24.948 [2024-11-20 07:31:48.891182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:34:24.948 [2024-11-20 07:31:48.891193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.913595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.913806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:24.948 [2024-11-20 07:31:48.913848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.369 ms 00:34:24.948 [2024-11-20 07:31:48.913860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.934589] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:34:24.948 [2024-11-20 07:31:48.934630] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:24.948 [2024-11-20 07:31:48.934657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.934670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:24.948 [2024-11-20 07:31:48.934687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.652 ms 00:34:24.948 [2024-11-20 07:31:48.934698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.965409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.965572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:24.948 [2024-11-20 07:31:48.965605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.615 ms 00:34:24.948 [2024-11-20 07:31:48.965617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:48.984553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:48.984711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:24.948 [2024-11-20 07:31:48.984747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.800 ms 00:34:24.948 [2024-11-20 07:31:48.984758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.003205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.003372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:24.948 [2024-11-20 07:31:49.003404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.319 ms 00:34:24.948 [2024-11-20 07:31:49.003415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.004349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.004381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:24.948 [2024-11-20 07:31:49.004400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:34:24.948 [2024-11-20 07:31:49.004411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 
07:31:49.103142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.103227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:24.948 [2024-11-20 07:31:49.103254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.692 ms 00:34:24.948 [2024-11-20 07:31:49.103265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.114831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:24.948 [2024-11-20 07:31:49.131602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.131696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:24.948 [2024-11-20 07:31:49.131712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.214 ms 00:34:24.948 [2024-11-20 07:31:49.131729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.131891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.131912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:24.948 [2024-11-20 07:31:49.131925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:24.948 [2024-11-20 07:31:49.131941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.132001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.132019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:24.948 [2024-11-20 07:31:49.132030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:24.948 [2024-11-20 07:31:49.132051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.132077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.132093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:24.948 [2024-11-20 07:31:49.132104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:24.948 [2024-11-20 07:31:49.132123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:24.948 [2024-11-20 07:31:49.132162] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:24.948 [2024-11-20 07:31:49.132186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:24.948 [2024-11-20 07:31:49.132202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:24.948 [2024-11-20 07:31:49.132217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:24.948 [2024-11-20 07:31:49.132227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.206 [2024-11-20 07:31:49.171294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.206 [2024-11-20 07:31:49.171344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:25.206 [2024-11-20 07:31:49.171368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.022 ms 00:34:25.206 [2024-11-20 07:31:49.171380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.206 [2024-11-20 07:31:49.171529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.206 [2024-11-20 07:31:49.171543] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:25.206 [2024-11-20 07:31:49.171566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:25.206 [2024-11-20 07:31:49.171577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.206 [2024-11-20 07:31:49.172621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:25.206 [2024-11-20 07:31:49.177697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.036 ms, result 0 00:34:25.206 [2024-11-20 07:31:49.178929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:25.206 Some configs were skipped because the RPC state that can call them passed over. 00:34:25.206 07:31:49 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:34:25.464 [2024-11-20 07:31:49.471718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.464 [2024-11-20 07:31:49.472033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:34:25.464 [2024-11-20 07:31:49.472185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:34:25.464 [2024-11-20 07:31:49.472245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.464 [2024-11-20 07:31:49.472337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.093 ms, result 0 00:34:25.464 true 00:34:25.464 07:31:49 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:34:25.724 [2024-11-20 07:31:49.747722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.724 [2024-11-20 07:31:49.747775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:34:25.724 [2024-11-20 07:31:49.747800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:34:25.724 [2024-11-20 07:31:49.747812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.724 [2024-11-20 07:31:49.747883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.305 ms, result 0 00:34:25.724 true 00:34:25.724 07:31:49 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76573 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76573 ']' 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76573 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76573 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76573' 00:34:25.724 killing process with pid 76573 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76573 00:34:25.724 07:31:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76573 00:34:27.173 [2024-11-20 07:31:50.964380] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.173 [2024-11-20 07:31:50.964679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:27.173 [2024-11-20 07:31:50.964706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:27.173 [2024-11-20 07:31:50.964719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.173 [2024-11-20 07:31:50.964759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:34:27.173 [2024-11-20 07:31:50.969229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.173 [2024-11-20 07:31:50.969261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:27.173 [2024-11-20 07:31:50.969279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.449 ms 00:34:27.173 [2024-11-20 07:31:50.969290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.173 [2024-11-20 07:31:50.969565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.173 [2024-11-20 07:31:50.969579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:27.173 [2024-11-20 07:31:50.969592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:34:27.173 [2024-11-20 07:31:50.969602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.173 [2024-11-20 07:31:50.973003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.173 [2024-11-20 07:31:50.973037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:27.173 [2024-11-20 07:31:50.973055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.377 ms 00:34:27.173 [2024-11-20 07:31:50.973066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.173 [2024-11-20 07:31:50.978932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.173 [2024-11-20 07:31:50.979088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:27.174 [2024-11-20 07:31:50.979115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.823 ms 00:34:27.174 [2024-11-20 07:31:50.979125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:50.994885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:50.994922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:27.174 [2024-11-20 07:31:50.994942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.695 ms 00:34:27.174 [2024-11-20 07:31:50.994962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.005197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.005354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:27.174 [2024-11-20 07:31:51.005381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.160 ms 00:34:27.174 [2024-11-20 07:31:51.005392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.005567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.005581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:27.174 [2024-11-20 07:31:51.005595] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:34:27.174 [2024-11-20 07:31:51.005605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.021093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.021127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:27.174 [2024-11-20 07:31:51.021144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.464 ms 00:34:27.174 [2024-11-20 07:31:51.021153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.036472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.036504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:27.174 [2024-11-20 07:31:51.036545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.260 ms 00:34:27.174 [2024-11-20 07:31:51.036555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.051264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.051400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:27.174 [2024-11-20 07:31:51.051432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.651 ms 00:34:27.174 [2024-11-20 07:31:51.051443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.066115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.174 [2024-11-20 07:31:51.066250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:27.174 [2024-11-20 07:31:51.066279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.586 ms 00:34:27.174 [2024-11-20 07:31:51.066289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.174 [2024-11-20 07:31:51.066368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:27.174 [2024-11-20 07:31:51.066388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 
07:31:51.066538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:34:27.174 [2024-11-20 07:31:51.066912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.066998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:27.174 [2024-11-20 07:31:51.067193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:27.175 [2024-11-20 07:31:51.067826] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:27.175 [2024-11-20 07:31:51.067846] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:27.175 [2024-11-20 07:31:51.067874] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:27.175 [2024-11-20 07:31:51.067890] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:27.175 [2024-11-20 07:31:51.067900] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:27.175 [2024-11-20 07:31:51.067915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:27.175 [2024-11-20 07:31:51.067925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:27.175 [2024-11-20 07:31:51.067940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:27.175 [2024-11-20 07:31:51.067950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:27.175 [2024-11-20 07:31:51.067964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:27.175 [2024-11-20 07:31:51.067973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:27.175 [2024-11-20 07:31:51.067988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:27.175 [2024-11-20 07:31:51.067999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:27.175 [2024-11-20 07:31:51.068014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:34:27.175 [2024-11-20 07:31:51.068029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.089299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.175 [2024-11-20 07:31:51.089334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:27.175 [2024-11-20 07:31:51.089358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms 00:34:27.175 [2024-11-20 07:31:51.089369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.090002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:27.175 [2024-11-20 07:31:51.090046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:27.175 [2024-11-20 07:31:51.090071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:34:27.175 [2024-11-20 07:31:51.090082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.162833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.175 [2024-11-20 07:31:51.162881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:27.175 [2024-11-20 07:31:51.162902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.175 [2024-11-20 07:31:51.162913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.163050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.175 [2024-11-20 07:31:51.163064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:27.175 [2024-11-20 07:31:51.163086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.175 [2024-11-20 07:31:51.163097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.163161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.175 [2024-11-20 07:31:51.163175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:27.175 [2024-11-20 07:31:51.163195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.175 [2024-11-20 07:31:51.163206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.175 [2024-11-20 07:31:51.163231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.175 [2024-11-20 07:31:51.163242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:27.176 [2024-11-20 07:31:51.163257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.176 [2024-11-20 07:31:51.163273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.176 [2024-11-20 07:31:51.291876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.176 [2024-11-20 07:31:51.292062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:27.176 [2024-11-20 07:31:51.292163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.176 [2024-11-20 07:31:51.292203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 
07:31:51.397524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.397753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:27.435 [2024-11-20 07:31:51.397902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.397953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.398130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.398240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:27.435 [2024-11-20 07:31:51.398309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.398345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.398410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.398448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:27.435 [2024-11-20 07:31:51.398488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.398598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.398839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.398858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:27.435 [2024-11-20 07:31:51.398877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.398889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.398942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.398956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:27.435 [2024-11-20 07:31:51.398973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.398984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.399037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.399050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:27.435 [2024-11-20 07:31:51.399072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.399083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.399148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:27.435 [2024-11-20 07:31:51.399161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:27.435 [2024-11-20 07:31:51.399178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:27.435 [2024-11-20 07:31:51.399188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:27.435 [2024-11-20 07:31:51.399338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 434.929 ms, result 0 00:34:28.372 07:31:52 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:34:28.372 07:31:52 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:28.372 [2024-11-20 07:31:52.571189] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:28.372 [2024-11-20 07:31:52.571632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76648 ] 00:34:28.632 [2024-11-20 07:31:52.765926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.891 [2024-11-20 07:31:52.886778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.150 [2024-11-20 07:31:53.244895] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:29.150 [2024-11-20 07:31:53.244978] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:29.410 [2024-11-20 07:31:53.407555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.407621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:29.410 [2024-11-20 07:31:53.407637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:29.410 [2024-11-20 07:31:53.407657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.411435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.411477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:29.410 [2024-11-20 07:31:53.411490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.757 ms 00:34:29.410 [2024-11-20 07:31:53.411516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.411638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:29.410 [2024-11-20 07:31:53.412736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:29.410 [2024-11-20 07:31:53.412772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.412784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:29.410 [2024-11-20 07:31:53.412795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:34:29.410 [2024-11-20 07:31:53.412806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.414325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:29.410 [2024-11-20 07:31:53.435408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.435465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:29.410 [2024-11-20 07:31:53.435481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.083 ms 00:34:29.410 [2024-11-20 07:31:53.435492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.435618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.435633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:29.410 [2024-11-20 07:31:53.435645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:34:29.410 [2024-11-20 07:31:53.435655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.442963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.443000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:29.410 [2024-11-20 07:31:53.443013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:34:29.410 [2024-11-20 07:31:53.443024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.443140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.443156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:29.410 [2024-11-20 07:31:53.443168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:29.410 [2024-11-20 07:31:53.443178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.443210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.443225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:29.410 [2024-11-20 07:31:53.443236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:29.410 [2024-11-20 07:31:53.443246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.443273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:29.410 [2024-11-20 07:31:53.448096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.448134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:29.410 [2024-11-20 07:31:53.448147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:34:29.410 [2024-11-20 07:31:53.448157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.448242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.410 [2024-11-20 07:31:53.448255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:29.410 [2024-11-20 07:31:53.448266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:29.410 [2024-11-20 07:31:53.448276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.410 [2024-11-20 07:31:53.448301] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:29.411 [2024-11-20 07:31:53.448328] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:29.411 [2024-11-20 07:31:53.448366] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:29.411 [2024-11-20 07:31:53.448385] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:29.411 [2024-11-20 07:31:53.448479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:29.411 [2024-11-20 07:31:53.448493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:29.411 [2024-11-20 07:31:53.448506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:29.411 [2024-11-20 07:31:53.448519] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:29.411 [2024-11-20 07:31:53.448536] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:29.411 [2024-11-20 07:31:53.448547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:29.411 [2024-11-20 07:31:53.448558] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:29.411 [2024-11-20 07:31:53.448567] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:29.411 [2024-11-20 07:31:53.448577] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:29.411 [2024-11-20 07:31:53.448588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.411 [2024-11-20 07:31:53.448599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:29.411 [2024-11-20 07:31:53.448609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:34:29.411 [2024-11-20 07:31:53.448619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.411 [2024-11-20 07:31:53.448696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.411 [2024-11-20 07:31:53.448708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:29.411 [2024-11-20 07:31:53.448722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:29.411 [2024-11-20 07:31:53.448732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.411 [2024-11-20 07:31:53.448848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:29.411 [2024-11-20 07:31:53.448863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:29.411 [2024-11-20 07:31:53.448874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:29.411 [2024-11-20 07:31:53.448885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.448896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:29.411 [2024-11-20 07:31:53.448906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.448915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:29.411 [2024-11-20 07:31:53.448925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:29.411 [2024-11-20 07:31:53.448935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:29.411 [2024-11-20 07:31:53.448944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:29.411 [2024-11-20 07:31:53.448954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:29.411 [2024-11-20 07:31:53.448965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:29.411 [2024-11-20 07:31:53.448975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:29.411 [2024-11-20 07:31:53.448996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:29.411 [2024-11-20 07:31:53.449006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:29.411 [2024-11-20 07:31:53.449016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449025] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:29.411 [2024-11-20 07:31:53.449035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:29.411 [2024-11-20 07:31:53.449064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:29.411 [2024-11-20 07:31:53.449100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:29.411 [2024-11-20 07:31:53.449128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:29.411 [2024-11-20 07:31:53.449156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:29.411 [2024-11-20 07:31:53.449184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:29.411 [2024-11-20 07:31:53.449203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:29.411 [2024-11-20 07:31:53.449212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:29.411 [2024-11-20 07:31:53.449222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:29.411 [2024-11-20 07:31:53.449231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:29.411 [2024-11-20 07:31:53.449240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:29.411 [2024-11-20 07:31:53.449250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:29.411 [2024-11-20 07:31:53.449268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:29.411 [2024-11-20 07:31:53.449277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449287] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:29.411 [2024-11-20 07:31:53.449298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:29.411 [2024-11-20 07:31:53.449309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:29.411 [2024-11-20 07:31:53.449333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:29.411 
[2024-11-20 07:31:53.449342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:29.411 [2024-11-20 07:31:53.449352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:29.411 [2024-11-20 07:31:53.449361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:29.411 [2024-11-20 07:31:53.449372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:29.411 [2024-11-20 07:31:53.449381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:29.411 [2024-11-20 07:31:53.449392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:29.411 [2024-11-20 07:31:53.449405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:29.411 [2024-11-20 07:31:53.449417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:29.411 [2024-11-20 07:31:53.449428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:29.411 [2024-11-20 07:31:53.449439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:29.411 [2024-11-20 07:31:53.449449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:29.411 [2024-11-20 07:31:53.449459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:29.411 [2024-11-20 07:31:53.449470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:29.411 [2024-11-20 07:31:53.449497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:29.411 [2024-11-20 07:31:53.449508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:29.411 [2024-11-20 07:31:53.449520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:29.411 [2024-11-20 07:31:53.449536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:29.412 [2024-11-20 07:31:53.449595] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:29.412 [2024-11-20 07:31:53.449618] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:29.412 [2024-11-20 07:31:53.449642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:29.412 [2024-11-20 07:31:53.449652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:29.412 [2024-11-20 07:31:53.449663] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:29.412 [2024-11-20 07:31:53.449675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.449685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:29.412 [2024-11-20 07:31:53.449700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:34:29.412 [2024-11-20 07:31:53.449710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.490493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.490554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:29.412 [2024-11-20 07:31:53.490571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.721 ms 00:34:29.412 [2024-11-20 07:31:53.490583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.490760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.490779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:29.412 [2024-11-20 07:31:53.490790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:29.412 [2024-11-20 07:31:53.490801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.546255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.546508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:29.412 [2024-11-20 07:31:53.546534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.408 ms 00:34:29.412 [2024-11-20 07:31:53.546553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.546697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.546711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:29.412 [2024-11-20 07:31:53.546723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:29.412 [2024-11-20 07:31:53.546733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.547232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.547247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:29.412 [2024-11-20 07:31:53.547259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:34:29.412 [2024-11-20 07:31:53.547275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 
07:31:53.547399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.547413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:29.412 [2024-11-20 07:31:53.547424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:29.412 [2024-11-20 07:31:53.547434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.567790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.567855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:29.412 [2024-11-20 07:31:53.567872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.331 ms 00:34:29.412 [2024-11-20 07:31:53.567883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.412 [2024-11-20 07:31:53.588499] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:34:29.412 [2024-11-20 07:31:53.588562] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:29.412 [2024-11-20 07:31:53.588581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.412 [2024-11-20 07:31:53.588593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:29.412 [2024-11-20 07:31:53.588607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.539 ms 00:34:29.412 [2024-11-20 07:31:53.588618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.672 [2024-11-20 07:31:53.620720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.672 [2024-11-20 07:31:53.620799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:29.672 [2024-11-20 07:31:53.620829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.977 ms 00:34:29.672 [2024-11-20 07:31:53.620841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.672 [2024-11-20 07:31:53.640909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.672 [2024-11-20 07:31:53.641142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:29.672 [2024-11-20 07:31:53.641167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.944 ms 00:34:29.672 [2024-11-20 07:31:53.641177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.672 [2024-11-20 07:31:53.660135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.672 [2024-11-20 07:31:53.660182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:29.672 [2024-11-20 07:31:53.660197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.811 ms 00:34:29.672 [2024-11-20 07:31:53.660208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.672 [2024-11-20 07:31:53.661074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.672 [2024-11-20 07:31:53.661106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:29.672 [2024-11-20 07:31:53.661118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:34:29.672 [2024-11-20 07:31:53.661129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.672 [2024-11-20 07:31:53.753905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:29.672 [2024-11-20 07:31:53.753980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:29.672 [2024-11-20 07:31:53.753998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.740 ms 00:34:29.672 [2024-11-20 07:31:53.754010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.766458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:29.673 [2024-11-20 07:31:53.783383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.783661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:29.673 [2024-11-20 07:31:53.783690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.202 ms 00:34:29.673 [2024-11-20 07:31:53.783701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.783891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.783907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:29.673 [2024-11-20 07:31:53.783920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:29.673 [2024-11-20 07:31:53.783930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.783991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.784003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:29.673 [2024-11-20 07:31:53.784014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:29.673 [2024-11-20 07:31:53.784025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.784054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.784069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:29.673 [2024-11-20 07:31:53.784080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:29.673 [2024-11-20 07:31:53.784091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.784128] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:29.673 [2024-11-20 07:31:53.784141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.784151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:29.673 [2024-11-20 07:31:53.784161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:29.673 [2024-11-20 07:31:53.784171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.823485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.823551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:29.673 [2024-11-20 07:31:53.823569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.288 ms 00:34:29.673 [2024-11-20 07:31:53.823580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.823741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.673 [2024-11-20 07:31:53.823755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:34:29.673 [2024-11-20 07:31:53.823768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:34:29.673 [2024-11-20 07:31:53.823778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.673 [2024-11-20 07:31:53.824975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:29.673 [2024-11-20 07:31:53.829731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.049 ms, result 0 00:34:29.673 [2024-11-20 07:31:53.830716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:29.673 [2024-11-20 07:31:53.850250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:31.052  [2024-11-20T07:31:56.190Z] Copying: 32/256 [MB] (32 MBps) [2024-11-20T07:31:57.127Z] Copying: 61/256 [MB] (28 MBps) [2024-11-20T07:31:58.142Z] Copying: 89/256 [MB] (28 MBps) [2024-11-20T07:31:59.079Z] Copying: 117/256 [MB] (28 MBps) [2024-11-20T07:32:00.026Z] Copying: 145/256 [MB] (28 MBps) [2024-11-20T07:32:00.962Z] Copying: 174/256 [MB] (28 MBps) [2024-11-20T07:32:01.901Z] Copying: 202/256 [MB] (28 MBps) [2024-11-20T07:32:02.837Z] Copying: 230/256 [MB] (27 MBps) [2024-11-20T07:32:02.837Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-20 07:32:02.769665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:38.634 [2024-11-20 07:32:02.786009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.634 [2024-11-20 07:32:02.786071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:38.634 [2024-11-20 07:32:02.786104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:38.634 [2024-11-20 07:32:02.786124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.634 [2024-11-20 07:32:02.786153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:34:38.634 [2024-11-20 07:32:02.790821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.634 [2024-11-20 07:32:02.790864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:38.634 [2024-11-20 07:32:02.790879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:34:38.634 [2024-11-20 07:32:02.790890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.634 [2024-11-20 07:32:02.791193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.634 [2024-11-20 07:32:02.791208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:38.634 [2024-11-20 07:32:02.791220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:34:38.634 [2024-11-20 07:32:02.791247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.634 [2024-11-20 07:32:02.794486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.634 [2024-11-20 07:32:02.794521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:38.634 [2024-11-20 07:32:02.794533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:34:38.634 [2024-11-20 07:32:02.794544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.634 [2024-11-20 07:32:02.800977] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.634 [2024-11-20 07:32:02.801013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:38.634 [2024-11-20 07:32:02.801027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.412 ms 00:34:38.634 [2024-11-20 07:32:02.801040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.893 [2024-11-20 07:32:02.840304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.893 [2024-11-20 07:32:02.840505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:38.893 [2024-11-20 07:32:02.840532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.164 ms 00:34:38.893 [2024-11-20 07:32:02.840543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.893 [2024-11-20 07:32:02.863021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.893 [2024-11-20 07:32:02.863195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:38.893 [2024-11-20 07:32:02.863219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.404 ms 00:34:38.893 [2024-11-20 07:32:02.863236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.893 [2024-11-20 07:32:02.863391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.893 [2024-11-20 07:32:02.863405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:38.893 [2024-11-20 07:32:02.863417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:34:38.894 [2024-11-20 07:32:02.863427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.894 [2024-11-20 07:32:02.901217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.894 [2024-11-20 07:32:02.901262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:38.894 [2024-11-20 07:32:02.901278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.757 ms 00:34:38.894 [2024-11-20 07:32:02.901289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.894 [2024-11-20 07:32:02.938072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.894 [2024-11-20 07:32:02.938166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:38.894 [2024-11-20 07:32:02.938184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.713 ms 00:34:38.894 [2024-11-20 07:32:02.938211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.894 [2024-11-20 07:32:02.975474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.894 [2024-11-20 07:32:02.975520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:38.894 [2024-11-20 07:32:02.975534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.194 ms 00:34:38.894 [2024-11-20 07:32:02.975545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.894 [2024-11-20 07:32:03.013368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.894 [2024-11-20 07:32:03.013419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:38.894 [2024-11-20 07:32:03.013434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.719 ms 00:34:38.894 [2024-11-20 07:32:03.013444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:38.894 [2024-11-20 07:32:03.013508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:38.894 [2024-11-20 07:32:03.013527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:34:38.894 [2024-11-20 07:32:03.013794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.013995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:38.894 [2024-11-20 07:32:03.014202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014709] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:38.895 [2024-11-20 07:32:03.014740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:38.895 [2024-11-20 07:32:03.014752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:38.895 [2024-11-20 07:32:03.014764] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:38.895 [2024-11-20 07:32:03.014776] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:38.895 [2024-11-20 07:32:03.014787] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:38.895 [2024-11-20 07:32:03.014798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:38.895 [2024-11-20 07:32:03.014809] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:38.895 [2024-11-20 07:32:03.014821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:38.895 [2024-11-20 07:32:03.014842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:38.895 [2024-11-20 07:32:03.014853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:38.895 [2024-11-20 07:32:03.014863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:38.895 [2024-11-20 07:32:03.014875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.895 [2024-11-20 07:32:03.014891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:38.895 [2024-11-20 07:32:03.014902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:34:38.895 [2024-11-20 07:32:03.014913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.895 [2024-11-20 07:32:03.036166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.895 [2024-11-20 07:32:03.036205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:38.895 [2024-11-20 07:32:03.036218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.227 ms 00:34:38.895 [2024-11-20 07:32:03.036229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:38.895 [2024-11-20 07:32:03.036876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:38.895 [2024-11-20 07:32:03.036899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:38.895 [2024-11-20 07:32:03.036912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:34:38.895 [2024-11-20 07:32:03.036923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.094458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.153 [2024-11-20 07:32:03.094729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:39.153 [2024-11-20 07:32:03.094756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.153 [2024-11-20 07:32:03.094769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.094911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.153 [2024-11-20 07:32:03.094926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:39.153 
[2024-11-20 07:32:03.094938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.153 [2024-11-20 07:32:03.094949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.095010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.153 [2024-11-20 07:32:03.095024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:39.153 [2024-11-20 07:32:03.095036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.153 [2024-11-20 07:32:03.095047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.095067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.153 [2024-11-20 07:32:03.095084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:39.153 [2024-11-20 07:32:03.095096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.153 [2024-11-20 07:32:03.095107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.226265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.153 [2024-11-20 07:32:03.226335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:39.153 [2024-11-20 07:32:03.226351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.153 [2024-11-20 07:32:03.226362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.153 [2024-11-20 07:32:03.334685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.334970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:39.154 [2024-11-20 07:32:03.334999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:39.154 [2024-11-20 07:32:03.335139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:39.154 [2024-11-20 07:32:03.335211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:39.154 [2024-11-20 07:32:03.335386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335450] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:39.154 [2024-11-20 07:32:03.335462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:39.154 [2024-11-20 07:32:03.335544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:39.154 [2024-11-20 07:32:03.335626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:39.154 [2024-11-20 07:32:03.335640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:39.154 [2024-11-20 07:32:03.335651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:39.154 [2024-11-20 07:32:03.335787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.815 ms, result 0 00:34:40.530 00:34:40.530 00:34:40.530 07:32:04 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:34:40.530 07:32:04 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:34:40.789 07:32:04 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:41.048 [2024-11-20 07:32:05.036480] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
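
The cmp/md5sum/spdk_dd sequence logged above is the core of the trim check: byte-compare the first 4 MiB of the dumped data file against /dev/zero to confirm the trimmed range reads back as zeroes, fingerprint the file, then rewrite ftl0 with 1024 blocks of the random pattern. A minimal standalone re-run of those three steps, reconstructed from the commands as they appear in this log (a sketch, not the actual trim.sh source):

  #!/usr/bin/env bash
  # Sketch only: paths copied from the log above; adjust for your checkout.
  FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl
  DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

  # Trimmed LBAs must read back as zeroes: compare 4 MiB against /dev/zero.
  cmp --bytes=4194304 "$FTL_DIR/data" /dev/zero

  # Fingerprint the dump so a later read of the same range can be diffed cheaply.
  md5sum "$FTL_DIR/data"

  # Rewrite the FTL bdev with 1024 blocks of the random pattern.
  "$DD_BIN" --if="$FTL_DIR/random_pattern" --ob=ftl0 --count=1024 \
            --json="$FTL_DIR/config/ftl.json"
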
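Every management step in the startup and shutdown traces above follows the same record pattern from mngt/ftl_mngt.c (Action or Rollback, then name, duration, status), so per-step timings can be pulled out of a saved console log mechanically. A small awk sketch, assuming the output has been captured to build.log (a placeholder name):

  # List the slowest FTL management steps, one "duration<TAB>name" per line.
  awk '
    / 428:trace_step: .* name: /     { sub(/.* name: /, "");     name = $0 }
    / 430:trace_step: .* duration: / { sub(/.* duration: /, ""); print $0 "\t" name }
  ' build.log | sort -rn | head

On this run that puts Restore P2L checkpoints (92.740 ms), Initialize NV cache (55.408 ms), and the persist/state steps (roughly 37 to 40 ms each) at the top, consistent with the 417.049 ms 'FTL startup' and 549.815 ms 'FTL shutdown' totals reported by finish_msg.
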
00:34:41.048 [2024-11-20 07:32:05.036661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76775 ] 00:34:41.048 [2024-11-20 07:32:05.222384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.307 [2024-11-20 07:32:05.333924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.566 [2024-11-20 07:32:05.705385] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:41.566 [2024-11-20 07:32:05.705636] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:41.826 [2024-11-20 07:32:05.867579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.867853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:41.826 [2024-11-20 07:32:05.867878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:41.826 [2024-11-20 07:32:05.867890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.870976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.871143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:41.826 [2024-11-20 07:32:05.871184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:34:41.826 [2024-11-20 07:32:05.871196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.871400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:41.826 [2024-11-20 07:32:05.872578] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:41.826 [2024-11-20 07:32:05.872613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.872624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:41.826 [2024-11-20 07:32:05.872636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:34:41.826 [2024-11-20 07:32:05.872646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.874195] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:41.826 [2024-11-20 07:32:05.894310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.894358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:41.826 [2024-11-20 07:32:05.894374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.116 ms 00:34:41.826 [2024-11-20 07:32:05.894386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.894497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.894512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:41.826 [2024-11-20 07:32:05.894523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:41.826 [2024-11-20 07:32:05.894534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.901643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
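
The SB metadata layout dumps printed at each startup (including the one that follows) obey a simple packing invariant: within one dump, each region's blk_offs equals the previous region's blk_offs plus blk_sz (0x20 + 0x5a00 = 0x5a20, 0x5a20 + 0x80 = 0x5aa0, and so on), with type 0xfffffffe covering the unallocated remainder. A quick shell check of that invariant, assuming the log was saved to build.log (again a placeholder) and run over one dump at a time, since every dump restarts at 0x0:

  expected=0
  grep -o 'blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' build.log |
  while read -r offs sz; do
    offs=$(( ${offs#blk_offs:} )); sz=$(( ${sz#blk_sz:} ))
    # Flag any gap or overlap between consecutive regions.
    if [ "$offs" -ne "$expected" ]; then
      printf 'gap/overlap at 0x%x (expected 0x%x)\n' "$offs" "$expected"
    fi
    expected=$(( offs + sz ))
  done

Over the nvc dump the running total ends at 0x7c20 + 0x13b6e0 = 0x143300 blocks, which at the 4 KiB block size implied by the base-device numbers (0x1900000 blocks = 102400.00 MiB) is exactly the 5171.00 MiB NV cache device capacity reported earlier.
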
00:34:41.826 [2024-11-20 07:32:05.901683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:41.826 [2024-11-20 07:32:05.901709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:34:41.826 [2024-11-20 07:32:05.901737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.901877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.901894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:41.826 [2024-11-20 07:32:05.901905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:34:41.826 [2024-11-20 07:32:05.901916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.901949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.901965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:41.826 [2024-11-20 07:32:05.901976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:41.826 [2024-11-20 07:32:05.901988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.902014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:41.826 [2024-11-20 07:32:05.907099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.907134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:41.826 [2024-11-20 07:32:05.907147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.093 ms 00:34:41.826 [2024-11-20 07:32:05.907157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.907228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.826 [2024-11-20 07:32:05.907241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:41.826 [2024-11-20 07:32:05.907253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:41.826 [2024-11-20 07:32:05.907263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.826 [2024-11-20 07:32:05.907288] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:41.826 [2024-11-20 07:32:05.907314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:41.826 [2024-11-20 07:32:05.907352] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:41.826 [2024-11-20 07:32:05.907371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:41.827 [2024-11-20 07:32:05.907463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:41.827 [2024-11-20 07:32:05.907477] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:41.827 [2024-11-20 07:32:05.907490] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:41.827 [2024-11-20 07:32:05.907503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:41.827 [2024-11-20 07:32:05.907529] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:41.827 [2024-11-20 07:32:05.907541] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:41.827 [2024-11-20 07:32:05.907552] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:41.827 [2024-11-20 07:32:05.907562] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:41.827 [2024-11-20 07:32:05.907572] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:41.827 [2024-11-20 07:32:05.907582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.827 [2024-11-20 07:32:05.907592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:41.827 [2024-11-20 07:32:05.907603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:34:41.827 [2024-11-20 07:32:05.907614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.827 [2024-11-20 07:32:05.907691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.827 [2024-11-20 07:32:05.907703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:41.827 [2024-11-20 07:32:05.907717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:41.827 [2024-11-20 07:32:05.907728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.827 [2024-11-20 07:32:05.907839] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:41.827 [2024-11-20 07:32:05.907870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:41.827 [2024-11-20 07:32:05.907882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:41.827 [2024-11-20 07:32:05.907894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.907905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:41.827 [2024-11-20 07:32:05.907916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.907945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:41.827 [2024-11-20 07:32:05.907956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:41.827 [2024-11-20 07:32:05.907967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:41.827 [2024-11-20 07:32:05.907978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:41.827 [2024-11-20 07:32:05.907988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:41.827 [2024-11-20 07:32:05.907999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:41.827 [2024-11-20 07:32:05.908010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:41.827 [2024-11-20 07:32:05.908031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:41.827 [2024-11-20 07:32:05.908042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:41.827 [2024-11-20 07:32:05.908053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:41.827 [2024-11-20 07:32:05.908074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908084] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:41.827 [2024-11-20 07:32:05.908104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:41.827 [2024-11-20 07:32:05.908135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:41.827 [2024-11-20 07:32:05.908165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:41.827 [2024-11-20 07:32:05.908195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:41.827 [2024-11-20 07:32:05.908225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:41.827 [2024-11-20 07:32:05.908245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:41.827 [2024-11-20 07:32:05.908255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:41.827 [2024-11-20 07:32:05.908265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:41.827 [2024-11-20 07:32:05.908274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:41.827 [2024-11-20 07:32:05.908285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:41.827 [2024-11-20 07:32:05.908294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:41.827 [2024-11-20 07:32:05.908314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:41.827 [2024-11-20 07:32:05.908324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:41.827 [2024-11-20 07:32:05.908347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:41.827 [2024-11-20 07:32:05.908358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:41.827 [2024-11-20 07:32:05.908384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:41.827 [2024-11-20 07:32:05.908395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:41.827 [2024-11-20 07:32:05.908405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:41.827 
[2024-11-20 07:32:05.908415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:41.827 [2024-11-20 07:32:05.908426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:41.827 [2024-11-20 07:32:05.908436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:41.827 [2024-11-20 07:32:05.908448] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:41.827 [2024-11-20 07:32:05.908462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:41.827 [2024-11-20 07:32:05.908487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:41.827 [2024-11-20 07:32:05.908498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:41.827 [2024-11-20 07:32:05.908511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:41.827 [2024-11-20 07:32:05.908522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:41.827 [2024-11-20 07:32:05.908534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:41.827 [2024-11-20 07:32:05.908545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:41.827 [2024-11-20 07:32:05.908557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:41.827 [2024-11-20 07:32:05.908568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:41.827 [2024-11-20 07:32:05.908579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:41.827 [2024-11-20 07:32:05.908635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:41.827 [2024-11-20 07:32:05.908648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:41.827 [2024-11-20 07:32:05.908671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:41.827 [2024-11-20 07:32:05.908684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:41.827 [2024-11-20 07:32:05.908695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:41.827 [2024-11-20 07:32:05.908707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.827 [2024-11-20 07:32:05.908718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:41.827 [2024-11-20 07:32:05.908734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:34:41.827 [2024-11-20 07:32:05.908745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.827 [2024-11-20 07:32:05.949005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:05.949059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:41.828 [2024-11-20 07:32:05.949075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.195 ms 00:34:41.828 [2024-11-20 07:32:05.949085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:05.949245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:05.949263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:41.828 [2024-11-20 07:32:05.949275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:34:41.828 [2024-11-20 07:32:05.949286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:06.003910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:06.003955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:41.828 [2024-11-20 07:32:06.003970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.597 ms 00:34:41.828 [2024-11-20 07:32:06.003985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:06.004110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:06.004124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:41.828 [2024-11-20 07:32:06.004136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:41.828 [2024-11-20 07:32:06.004146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:06.004585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:06.004599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:41.828 [2024-11-20 07:32:06.004611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:34:41.828 [2024-11-20 07:32:06.004627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:06.004747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:06.004762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:41.828 [2024-11-20 07:32:06.004773] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:34:41.828 [2024-11-20 07:32:06.004783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:41.828 [2024-11-20 07:32:06.024714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:41.828 [2024-11-20 07:32:06.024758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:41.828 [2024-11-20 07:32:06.024773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.906 ms 00:34:41.828 [2024-11-20 07:32:06.024784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.044792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:34:42.087 [2024-11-20 07:32:06.044836] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:42.087 [2024-11-20 07:32:06.044851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.044862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:42.087 [2024-11-20 07:32:06.044874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.918 ms 00:34:42.087 [2024-11-20 07:32:06.044885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.075524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.075577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:42.087 [2024-11-20 07:32:06.075592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.552 ms 00:34:42.087 [2024-11-20 07:32:06.075604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.094409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.094447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:42.087 [2024-11-20 07:32:06.094461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.715 ms 00:34:42.087 [2024-11-20 07:32:06.094472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.113218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.113254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:42.087 [2024-11-20 07:32:06.113267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.667 ms 00:34:42.087 [2024-11-20 07:32:06.113277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.114103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.114127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:42.087 [2024-11-20 07:32:06.114140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:34:42.087 [2024-11-20 07:32:06.114150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.204474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.204543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:42.087 [2024-11-20 07:32:06.204560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.294 ms 00:34:42.087 [2024-11-20 07:32:06.204571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.216464] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:42.087 [2024-11-20 07:32:06.233345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.233406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:42.087 [2024-11-20 07:32:06.233422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.635 ms 00:34:42.087 [2024-11-20 07:32:06.233433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.233597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.233613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:42.087 [2024-11-20 07:32:06.233626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:42.087 [2024-11-20 07:32:06.233636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.233694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.233706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:42.087 [2024-11-20 07:32:06.233717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:42.087 [2024-11-20 07:32:06.233728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.233756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.233771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:42.087 [2024-11-20 07:32:06.233782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:42.087 [2024-11-20 07:32:06.233792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.233850] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:42.087 [2024-11-20 07:32:06.233864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.233876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:42.087 [2024-11-20 07:32:06.233887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:42.087 [2024-11-20 07:32:06.233897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.271773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.271821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:42.087 [2024-11-20 07:32:06.271853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.852 ms 00:34:42.087 [2024-11-20 07:32:06.271864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.087 [2024-11-20 07:32:06.271985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.087 [2024-11-20 07:32:06.272000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:42.087 [2024-11-20 07:32:06.272012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:34:42.087 [2024-11-20 07:32:06.272022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
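The superblock region tables dumped above (`SB metadata layout - nvc` / `base dev`) can be sanity-checked: each region's `blk_offs` should equal the previous region's `blk_offs + blk_sz`, and the end of the last region should match the device capacity. Below is a minimal standalone sketch of that check for the nvc table; the `{type, blk_offs, blk_sz}` triples are copied from the dump, while the 4 KiB block size is an assumption and the program itself is illustrative, not SPDK code.

```c
/* Contiguity check for the "SB metadata layout - nvc" dump above.
 * Region descriptors copied verbatim from the log; 4 KiB block size assumed.
 * Illustrative only, not SPDK code. */
#include <assert.h>
#include <stdio.h>

struct region { unsigned type; unsigned long blk_offs, blk_sz; };

int main(void)
{
	const struct region nvc[] = {
		{ 0x0, 0x0, 0x20 },     { 0x2, 0x20, 0x5a00 },  { 0x3, 0x5a20, 0x80 },
		{ 0x4, 0x5aa0, 0x80 },  { 0xa, 0x5b20, 0x800 }, { 0xb, 0x6320, 0x800 },
		{ 0xc, 0x6b20, 0x800 }, { 0xd, 0x7320, 0x800 }, { 0xe, 0x7b20, 0x40 },
		{ 0xf, 0x7b60, 0x40 },  { 0x10, 0x7ba0, 0x20 }, { 0x11, 0x7bc0, 0x20 },
		{ 0x6, 0x7be0, 0x20 },  { 0x7, 0x7c00, 0x20 },
		{ 0xfffffffe, 0x7c20, 0x13b6e0 }, /* trailing free region */
	};
	unsigned long end = 0;

	for (unsigned i = 0; i < sizeof(nvc) / sizeof(nvc[0]); i++) {
		assert(nvc[i].blk_offs == end); /* no gaps, no overlaps */
		end = nvc[i].blk_offs + nvc[i].blk_sz;
	}
	/* 0x143300 blocks * 4 KiB = 5171.00 MiB */
	printf("layout ends at 0x%lx blocks = %.2f MiB\n",
	       end, end * 4096.0 / (1024 * 1024));
	return 0;
}
```

The printed total, 5171.00 MiB, matches the `NV cache device capacity` notice in the same dump, so the table is gap-free.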
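Every management step in the trace is bracketed by the same four notices from mngt/ftl_mngt.c: `Action`, `name`, `duration`, `status`. A hypothetical sketch of that logging shape follows; `run_step`, `now_ms`, and the placeholder step are invented names, and only the four-line output format is taken from the log.

```c
/* Hypothetical sketch of the Action/name/duration/status pattern that
 * mngt/ftl_mngt.c logs for each step; names here are invented and only
 * the output shape mirrors the trace above. */
#include <stdio.h>
#include <time.h>

typedef int (*step_fn)(void);

static double now_ms(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

static int run_step(const char *dev, const char *name, step_fn fn)
{
	double start = now_ms();
	int status = fn(); /* the actual step body */

	printf("[FTL][%s] Action\n", dev);
	printf("[FTL][%s] name:     %s\n", dev, name);
	printf("[FTL][%s] duration: %.3f ms\n", dev, now_ms() - start);
	printf("[FTL][%s] status:   %d\n", dev, status);
	return status;
}

static int init_l2p(void) { return 0; } /* placeholder step */

int main(void)
{
	return run_step("ftl0", "Initialize L2P", init_l2p);
}
```

Summing the per-step durations printed this way is roughly what the closing `finish_msg` notice reports just below as the total for the whole 'FTL startup' process.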
00:34:42.087 [2024-11-20 07:32:06.272923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:34:42.088 [2024-11-20 07:32:06.277513] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.025 ms, result 0
00:34:42.088 [2024-11-20 07:32:06.278425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:42.346 [2024-11-20 07:32:06.297975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:34:42.346 [2024-11-20T07:32:06.549Z] Copying: 4096/4096 [kB] (average 27 MBps)
[2024-11-20 07:32:06.446696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:42.346 [2024-11-20 07:32:06.461662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.461707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:34:42.346 [2024-11-20 07:32:06.461724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:34:42.346 [2024-11-20 07:32:06.461741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.346 [2024-11-20 07:32:06.461766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:34:42.346 [2024-11-20 07:32:06.466024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.466051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:34:42.346 [2024-11-20 07:32:06.466064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms
00:34:42.346 [2024-11-20 07:32:06.466074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.346 [2024-11-20 07:32:06.468277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.468313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:34:42.346 [2024-11-20 07:32:06.468328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms
00:34:42.346 [2024-11-20 07:32:06.468339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.346 [2024-11-20 07:32:06.471644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.471680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:34:42.346 [2024-11-20 07:32:06.471693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.284 ms
00:34:42.346 [2024-11-20 07:32:06.471703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.346 [2024-11-20 07:32:06.477646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.477676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:34:42.346 [2024-11-20 07:32:06.477688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.910 ms
00:34:42.346 [2024-11-20 07:32:06.477698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.346 [2024-11-20 07:32:06.516527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:42.346 [2024-11-20 07:32:06.516571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:34:42.346 [2024-11-20 07:32:06.516587] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 38.780 ms 00:34:42.346 [2024-11-20 07:32:06.516598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.346 [2024-11-20 07:32:06.538442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.346 [2024-11-20 07:32:06.538501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:42.346 [2024-11-20 07:32:06.538522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.782 ms 00:34:42.346 [2024-11-20 07:32:06.538534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.346 [2024-11-20 07:32:06.538705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.346 [2024-11-20 07:32:06.538719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:42.346 [2024-11-20 07:32:06.538731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:34:42.346 [2024-11-20 07:32:06.538742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.604 [2024-11-20 07:32:06.577941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.605 [2024-11-20 07:32:06.578004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:42.605 [2024-11-20 07:32:06.578020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.165 ms 00:34:42.605 [2024-11-20 07:32:06.578031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.605 [2024-11-20 07:32:06.615799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.605 [2024-11-20 07:32:06.615851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:42.605 [2024-11-20 07:32:06.615866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.695 ms 00:34:42.605 [2024-11-20 07:32:06.615877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.605 [2024-11-20 07:32:06.652618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.605 [2024-11-20 07:32:06.652661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:42.605 [2024-11-20 07:32:06.652676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.681 ms 00:34:42.605 [2024-11-20 07:32:06.652686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.605 [2024-11-20 07:32:06.690220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.605 [2024-11-20 07:32:06.690263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:42.605 [2024-11-20 07:32:06.690278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.420 ms 00:34:42.605 [2024-11-20 07:32:06.690288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.605 [2024-11-20 07:32:06.690351] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:42.605 [2024-11-20 07:32:06.690370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:34:42.605 [2024-11-20 07:32:06.690417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.690999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:42.605 [2024-11-20 07:32:06.691010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691239] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:42.606 [2024-11-20 07:32:06.691498] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:42.606 [2024-11-20 07:32:06.691509] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:42.606 [2024-11-20 07:32:06.691519] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:42.606 [2024-11-20 07:32:06.691529] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:34:42.606 [2024-11-20 07:32:06.691539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:42.606 [2024-11-20 07:32:06.691549] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:42.606 [2024-11-20 07:32:06.691559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:42.606 [2024-11-20 07:32:06.691570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:42.606 [2024-11-20 07:32:06.691580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:42.606 [2024-11-20 07:32:06.691589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:42.606 [2024-11-20 07:32:06.691599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:42.606 [2024-11-20 07:32:06.691609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.606 [2024-11-20 07:32:06.691625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:42.606 [2024-11-20 07:32:06.691636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:34:42.606 [2024-11-20 07:32:06.691646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.712411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.606 [2024-11-20 07:32:06.712462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:42.606 [2024-11-20 07:32:06.712475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.742 ms 00:34:42.606 [2024-11-20 07:32:06.712486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.713086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.606 [2024-11-20 07:32:06.713105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:42.606 [2024-11-20 07:32:06.713117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:34:42.606 [2024-11-20 07:32:06.713128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.769879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.606 [2024-11-20 07:32:06.769927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:42.606 [2024-11-20 07:32:06.769942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.606 [2024-11-20 07:32:06.769953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.770073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.606 [2024-11-20 07:32:06.770094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:42.606 [2024-11-20 07:32:06.770106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.606 [2024-11-20 07:32:06.770116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.770176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.606 [2024-11-20 07:32:06.770190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:42.606 [2024-11-20 07:32:06.770201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.606 [2024-11-20 07:32:06.770211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.606 [2024-11-20 07:32:06.770231] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.606 [2024-11-20 07:32:06.770246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:42.607 [2024-11-20 07:32:06.770256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.607 [2024-11-20 07:32:06.770267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:06.898222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:06.898285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:42.865 [2024-11-20 07:32:06.898301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:06.898313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.003482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.003558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:42.865 [2024-11-20 07:32:07.003574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.003585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.003684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.003697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:42.865 [2024-11-20 07:32:07.003708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.003719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.003749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.003759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:42.865 [2024-11-20 07:32:07.003775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.003786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.003920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.003935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:42.865 [2024-11-20 07:32:07.003947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.003957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.003995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.004008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:42.865 [2024-11-20 07:32:07.004019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.004033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.865 [2024-11-20 07:32:07.004071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.865 [2024-11-20 07:32:07.004082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:42.865 [2024-11-20 07:32:07.004093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.865 [2024-11-20 07:32:07.004103] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:34:42.865 [2024-11-20 07:32:07.004150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:34:42.865 [2024-11-20 07:32:07.004161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:34:42.865 [2024-11-20 07:32:07.004175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:34:42.865 [2024-11-20 07:32:07.004185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:42.865 [2024-11-20 07:32:07.004323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.651 ms, result 0
00:34:44.238 07:32:08 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76811
00:34:44.238 07:32:08 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:34:44.238 07:32:08 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76811
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76811 ']'
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:44.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:44.238 07:32:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:34:44.238 [2024-11-20 07:32:08.219053] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
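The xtrace above shows `waitforlisten 76811` polling until the relaunched `spdk_tgt` accepts connections on `/var/tmp/spdk.sock` (`max_retries=100`). The real helper is shell, in common/autotest_common.sh, and also checks PID liveness; below is a minimal C analogue of the same polling idea — `wait_for_rpc_sock` is an invented name and the 100 ms delay between attempts is an assumption.

```c
/* Minimal C analogue of the waitforlisten step traced above: poll a UNIX
 * domain socket until the SPDK target accepts connections. Illustrative
 * sketch only; the real helper is shell and does more. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_rpc_sock(const char *path, int max_retries)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

	for (int i = 0; i < max_retries; i++) {
		int fd = socket(AF_UNIX, SOCK_STREAM, 0);
		if (fd < 0)
			return -1;
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
			close(fd); /* target is up and listening */
			return 0;
		}
		close(fd);
		usleep(100 * 1000); /* 100 ms between attempts */
	}
	return -1;
}

int main(void)
{
	if (wait_for_rpc_sock("/var/tmp/spdk.sock", 100) != 0) {
		fprintf(stderr, "spdk_tgt did not start listening\n");
		return 1;
	}
	puts("RPC socket is ready");
	return 0;
}
```

Once the socket accepts a connection, the harness proceeds to drive the target over JSON-RPC, as the `rpc.py load_config` trace further below shows.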
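Earlier in the shutdown trace, ftl_debug.c dumped per-band validity — all 100 bands read `0 / 261120 wr_cnt: 0 state: free` — and device stats where `WAF: inf` simply reflects `user writes: 0`: write amplification is total writes over user writes, and the 960 writes recorded here were all metadata. For eyeballing larger runs, here is a small illustrative filter (not part of SPDK) that tallies such dump lines from a saved log, assuming exactly the line shape shown above.

```c
/* Summarize "Band N: valid / total wr_cnt: W state: S" lines read from
 * stdin. The line shape is taken from the ftl_dev_dump_bands output above;
 * the parser is illustrative, not SPDK code. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[512], state[32];
	unsigned band, valid, total;
	unsigned long long wr_cnt, valid_sum = 0;
	unsigned free_bands = 0, other_bands = 0;

	while (fgets(line, sizeof(line), stdin)) {
		const char *p = strstr(line, "Band ");
		if (!p || sscanf(p, "Band %u: %u / %u wr_cnt: %llu state: %31s",
				 &band, &valid, &total, &wr_cnt, state) != 5)
			continue;
		valid_sum += valid;
		if (strcmp(state, "free") == 0)
			free_bands++;
		else
			other_bands++;
	}
	printf("bands: %u free, %u other, %llu valid blocks total\n",
	       free_bands, other_bands, valid_sum);
	return 0;
}
```

Fed the dump above (e.g. `grep ftl_dev_dump_bands build.log | ./bands`), it would report 100 free bands and 0 valid blocks, consistent with the `total valid LBAs: 0` stat.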
00:34:44.238 [2024-11-20 07:32:08.219223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76811 ] 00:34:44.239 [2024-11-20 07:32:08.410075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.497 [2024-11-20 07:32:08.527539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.431 07:32:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.431 07:32:09 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:34:45.431 07:32:09 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:34:45.688 [2024-11-20 07:32:09.671747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:45.688 [2024-11-20 07:32:09.671824] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:45.688 [2024-11-20 07:32:09.858934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.688 [2024-11-20 07:32:09.858986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:45.688 [2024-11-20 07:32:09.859006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:45.688 [2024-11-20 07:32:09.859016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.688 [2024-11-20 07:32:09.862174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.688 [2024-11-20 07:32:09.862218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:45.688 [2024-11-20 07:32:09.862254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:34:45.688 [2024-11-20 07:32:09.862265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.688 [2024-11-20 07:32:09.862384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:45.688 [2024-11-20 07:32:09.863406] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:45.688 [2024-11-20 07:32:09.863437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.688 [2024-11-20 07:32:09.863448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:45.688 [2024-11-20 07:32:09.863462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:34:45.689 [2024-11-20 07:32:09.863472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.689 [2024-11-20 07:32:09.864971] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:45.689 [2024-11-20 07:32:09.884456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.689 [2024-11-20 07:32:09.884500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:45.689 [2024-11-20 07:32:09.884516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.489 ms 00:34:45.689 [2024-11-20 07:32:09.884529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.689 [2024-11-20 07:32:09.884634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.689 [2024-11-20 07:32:09.884651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:45.689 [2024-11-20 07:32:09.884663] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:45.689 [2024-11-20 07:32:09.884676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.891556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.891596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:45.948 [2024-11-20 07:32:09.891610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.825 ms 00:34:45.948 [2024-11-20 07:32:09.891623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.891742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.891760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:45.948 [2024-11-20 07:32:09.891772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:34:45.948 [2024-11-20 07:32:09.891785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.891837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.891852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:45.948 [2024-11-20 07:32:09.891863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:34:45.948 [2024-11-20 07:32:09.891875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.891904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:45.948 [2024-11-20 07:32:09.896948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.896979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:45.948 [2024-11-20 07:32:09.896994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.049 ms 00:34:45.948 [2024-11-20 07:32:09.897005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.897081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.897094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:45.948 [2024-11-20 07:32:09.897108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:45.948 [2024-11-20 07:32:09.897121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.897148] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:45.948 [2024-11-20 07:32:09.897170] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:45.948 [2024-11-20 07:32:09.897217] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:45.948 [2024-11-20 07:32:09.897237] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:45.948 [2024-11-20 07:32:09.897332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:45.948 [2024-11-20 07:32:09.897345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:45.948 [2024-11-20 07:32:09.897363] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:45.948 [2024-11-20 07:32:09.897378] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897393] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:45.948 [2024-11-20 07:32:09.897418] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:45.948 [2024-11-20 07:32:09.897428] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:45.948 [2024-11-20 07:32:09.897443] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:45.948 [2024-11-20 07:32:09.897454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.897466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:45.948 [2024-11-20 07:32:09.897476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:34:45.948 [2024-11-20 07:32:09.897488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.897569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.948 [2024-11-20 07:32:09.897587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:45.948 [2024-11-20 07:32:09.897598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:45.948 [2024-11-20 07:32:09.897611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.948 [2024-11-20 07:32:09.897705] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:45.948 [2024-11-20 07:32:09.897719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:45.948 [2024-11-20 07:32:09.897730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.897753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:45.948 [2024-11-20 07:32:09.897766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.897776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:45.948 [2024-11-20 07:32:09.897803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:45.948 [2024-11-20 07:32:09.897825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:45.948 [2024-11-20 07:32:09.897836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:45.948 [2024-11-20 07:32:09.897848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:45.948 [2024-11-20 07:32:09.897857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:45.948 [2024-11-20 07:32:09.897869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:45.948 [2024-11-20 07:32:09.897879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:45.948 [2024-11-20 07:32:09.897891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 
[2024-11-20 07:32:09.897900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:45.948 [2024-11-20 07:32:09.897912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.897933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:45.948 [2024-11-20 07:32:09.897954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:45.948 [2024-11-20 07:32:09.897966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:45.948 [2024-11-20 07:32:09.897976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:45.948 [2024-11-20 07:32:09.897990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:45.948 [2024-11-20 07:32:09.898011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:45.948 [2024-11-20 07:32:09.898021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:45.948 [2024-11-20 07:32:09.898042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:45.948 [2024-11-20 07:32:09.898053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:45.948 [2024-11-20 07:32:09.898074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:45.948 [2024-11-20 07:32:09.898084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:45.948 [2024-11-20 07:32:09.898116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:45.948 [2024-11-20 07:32:09.898128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:45.948 [2024-11-20 07:32:09.898137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:45.948 [2024-11-20 07:32:09.898151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:45.948 [2024-11-20 07:32:09.898161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:45.948 [2024-11-20 07:32:09.898175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:45.948 [2024-11-20 07:32:09.898196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:45.948 [2024-11-20 07:32:09.898205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:45.948 [2024-11-20 07:32:09.898227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:45.948 [2024-11-20 07:32:09.898245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:45.948 [2024-11-20 07:32:09.898255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:45.948 [2024-11-20 07:32:09.898268] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:34:45.948 [2024-11-20 07:32:09.898278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:45.948 [2024-11-20 07:32:09.898290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:45.948 [2024-11-20 07:32:09.898300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:45.948 [2024-11-20 07:32:09.898311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:45.948 [2024-11-20 07:32:09.898321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:45.948 [2024-11-20 07:32:09.898335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:45.948 [2024-11-20 07:32:09.898348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:45.948 [2024-11-20 07:32:09.898364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:45.948 [2024-11-20 07:32:09.898375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:45.948 [2024-11-20 07:32:09.898390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:45.949 [2024-11-20 07:32:09.898401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:45.949 [2024-11-20 07:32:09.898414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:45.949 [2024-11-20 07:32:09.898424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:45.949 [2024-11-20 07:32:09.898437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:45.949 [2024-11-20 07:32:09.898447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:45.949 [2024-11-20 07:32:09.898461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:45.949 [2024-11-20 07:32:09.898472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:45.949 [2024-11-20 07:32:09.898533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:45.949 [2024-11-20 
07:32:09.898545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:45.949 [2024-11-20 07:32:09.898572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:45.949 [2024-11-20 07:32:09.898586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:45.949 [2024-11-20 07:32:09.898597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:45.949 [2024-11-20 07:32:09.898610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.898621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:45.949 [2024-11-20 07:32:09.898634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:34:45.949 [2024-11-20 07:32:09.898644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.939758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.939803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:45.949 [2024-11-20 07:32:09.939829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.043 ms 00:34:45.949 [2024-11-20 07:32:09.939841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.940011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.940025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:45.949 [2024-11-20 07:32:09.940039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:34:45.949 [2024-11-20 07:32:09.940050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.986638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.986681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:45.949 [2024-11-20 07:32:09.986712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.551 ms 00:34:45.949 [2024-11-20 07:32:09.986724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.986880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.986895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:45.949 [2024-11-20 07:32:09.986913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:45.949 [2024-11-20 07:32:09.986925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.987386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.987404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:45.949 [2024-11-20 07:32:09.987426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:34:45.949 [2024-11-20 07:32:09.987437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:09.987563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:09.987577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:45.949 [2024-11-20 07:32:09.987593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:45.949 [2024-11-20 07:32:09.987604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.011587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.011634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:45.949 [2024-11-20 07:32:10.011657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.948 ms 00:34:45.949 [2024-11-20 07:32:10.011669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.033401] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:45.949 [2024-11-20 07:32:10.033445] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:45.949 [2024-11-20 07:32:10.033468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.033480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:45.949 [2024-11-20 07:32:10.033498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.619 ms 00:34:45.949 [2024-11-20 07:32:10.033508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.064993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.065045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:45.949 [2024-11-20 07:32:10.065067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.363 ms 00:34:45.949 [2024-11-20 07:32:10.065079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.084616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.084657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:45.949 [2024-11-20 07:32:10.084683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.414 ms 00:34:45.949 [2024-11-20 07:32:10.084694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.103249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.103290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:45.949 [2024-11-20 07:32:10.103309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.461 ms 00:34:45.949 [2024-11-20 07:32:10.103319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.949 [2024-11-20 07:32:10.104196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.949 [2024-11-20 07:32:10.104223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:45.949 [2024-11-20 07:32:10.104241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:34:45.949 [2024-11-20 07:32:10.104251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 
07:32:10.208259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.208327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:46.207 [2024-11-20 07:32:10.208351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.966 ms 00:34:46.207 [2024-11-20 07:32:10.208363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.220743] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:46.207 [2024-11-20 07:32:10.237675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.237754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:46.207 [2024-11-20 07:32:10.237778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.139 ms 00:34:46.207 [2024-11-20 07:32:10.237794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.237931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.237952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:46.207 [2024-11-20 07:32:10.237964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:46.207 [2024-11-20 07:32:10.237979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.238035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.238052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:46.207 [2024-11-20 07:32:10.238063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:46.207 [2024-11-20 07:32:10.238078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.238119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.238137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:46.207 [2024-11-20 07:32:10.238149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:46.207 [2024-11-20 07:32:10.238165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.238210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:46.207 [2024-11-20 07:32:10.238233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.238244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:46.207 [2024-11-20 07:32:10.238267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:46.207 [2024-11-20 07:32:10.238278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.275425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.275473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:46.207 [2024-11-20 07:32:10.275495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.104 ms 00:34:46.207 [2024-11-20 07:32:10.275506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.275642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.207 [2024-11-20 07:32:10.275657] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:46.207 [2024-11-20 07:32:10.275674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:46.207 [2024-11-20 07:32:10.275690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.207 [2024-11-20 07:32:10.276871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:46.207 [2024-11-20 07:32:10.281557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.451 ms, result 0 00:34:46.207 [2024-11-20 07:32:10.282906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:46.207 Some configs were skipped because the RPC state that can call them passed over. 00:34:46.207 07:32:10 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:34:46.465 [2024-11-20 07:32:10.508406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.466 [2024-11-20 07:32:10.508481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:34:46.466 [2024-11-20 07:32:10.508499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:34:46.466 [2024-11-20 07:32:10.508517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.466 [2024-11-20 07:32:10.508587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.973 ms, result 0 00:34:46.466 true 00:34:46.466 07:32:10 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:34:46.723 [2024-11-20 07:32:10.756164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.723 [2024-11-20 07:32:10.756220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:34:46.723 [2024-11-20 07:32:10.756243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:34:46.723 [2024-11-20 07:32:10.756255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.723 [2024-11-20 07:32:10.756308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.422 ms, result 0 00:34:46.723 true 00:34:46.723 07:32:10 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76811 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76811 ']' 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76811 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76811 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.723 killing process with pid 76811 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76811' 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76811 00:34:46.723 07:32:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76811 00:34:48.124 [2024-11-20 07:32:11.968278] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.968343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:48.124 [2024-11-20 07:32:11.968360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:48.124 [2024-11-20 07:32:11.968373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:11.968397] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:34:48.124 [2024-11-20 07:32:11.972784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.972831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:48.124 [2024-11-20 07:32:11.972849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.365 ms 00:34:48.124 [2024-11-20 07:32:11.972859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:11.973113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.973130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:48.124 [2024-11-20 07:32:11.973144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:34:48.124 [2024-11-20 07:32:11.973154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:11.976617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.976649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:48.124 [2024-11-20 07:32:11.976666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:34:48.124 [2024-11-20 07:32:11.976676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:11.982547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.982590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:48.124 [2024-11-20 07:32:11.982607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:34:48.124 [2024-11-20 07:32:11.982617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:11.998280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:11.998312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:48.124 [2024-11-20 07:32:11.998332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:34:48.124 [2024-11-20 07:32:11.998353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.009095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.009129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:48.124 [2024-11-20 07:32:12.009150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.667 ms 00:34:48.124 [2024-11-20 07:32:12.009160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.009308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.009322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:48.124 [2024-11-20 07:32:12.009335] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:34:48.124 [2024-11-20 07:32:12.009345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.024974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.025005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:48.124 [2024-11-20 07:32:12.025037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.603 ms 00:34:48.124 [2024-11-20 07:32:12.025047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.040629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.040658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:48.124 [2024-11-20 07:32:12.040697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.518 ms 00:34:48.124 [2024-11-20 07:32:12.040707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.055827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.055858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:48.124 [2024-11-20 07:32:12.055880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.053 ms 00:34:48.124 [2024-11-20 07:32:12.055891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.070112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.124 [2024-11-20 07:32:12.070159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:48.124 [2024-11-20 07:32:12.070179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:34:48.124 [2024-11-20 07:32:12.070190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.124 [2024-11-20 07:32:12.070250] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:48.124 [2024-11-20 07:32:12.070270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 
07:32:12.070434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:48.124 [2024-11-20 07:32:12.070618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:34:48.125 [2024-11-20 07:32:12.070834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.070997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:48.125 [2024-11-20 07:32:12.071774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:48.125 [2024-11-20 07:32:12.071800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:34:48.125 [2024-11-20 07:32:12.071833] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:48.125 [2024-11-20 07:32:12.071855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:48.125 [2024-11-20 07:32:12.071866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:48.125 [2024-11-20 07:32:12.071882] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:48.125 [2024-11-20 07:32:12.071892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:48.125 [2024-11-20 07:32:12.071907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:48.125 [2024-11-20 07:32:12.071918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:48.125 [2024-11-20 07:32:12.071932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:48.126 [2024-11-20 07:32:12.071941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:48.126 [2024-11-20 07:32:12.071955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:48.126 [2024-11-20 07:32:12.071966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:48.126 [2024-11-20 07:32:12.071982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:34:48.126 [2024-11-20 07:32:12.071992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.093325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.126 [2024-11-20 07:32:12.093358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:48.126 [2024-11-20 07:32:12.093381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.294 ms 00:34:48.126 [2024-11-20 07:32:12.093392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.093972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:48.126 [2024-11-20 07:32:12.093995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:48.126 [2024-11-20 07:32:12.094014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:34:48.126 [2024-11-20 07:32:12.094030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.166399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.126 [2024-11-20 07:32:12.166450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:48.126 [2024-11-20 07:32:12.166470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.126 [2024-11-20 07:32:12.166482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.166632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.126 [2024-11-20 07:32:12.166647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:48.126 [2024-11-20 07:32:12.166663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.126 [2024-11-20 07:32:12.166680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.166743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.126 [2024-11-20 07:32:12.166757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:48.126 [2024-11-20 07:32:12.166778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.126 [2024-11-20 07:32:12.166789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.166828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.126 [2024-11-20 07:32:12.166840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:48.126 [2024-11-20 07:32:12.166856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.126 [2024-11-20 07:32:12.166866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.126 [2024-11-20 07:32:12.297926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.126 [2024-11-20 07:32:12.297978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:48.126 [2024-11-20 07:32:12.298001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.126 [2024-11-20 07:32:12.298012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 
07:32:12.407049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:48.384 [2024-11-20 07:32:12.407123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:48.384 [2024-11-20 07:32:12.407300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:48.384 [2024-11-20 07:32:12.407375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:48.384 [2024-11-20 07:32:12.407562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:48.384 [2024-11-20 07:32:12.407645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:48.384 [2024-11-20 07:32:12.407740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.407803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:48.384 [2024-11-20 07:32:12.407832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:48.384 [2024-11-20 07:32:12.407849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:48.384 [2024-11-20 07:32:12.407860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:48.384 [2024-11-20 07:32:12.408013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.696 ms, result 0 00:34:49.317 07:32:13 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:49.575 [2024-11-20 07:32:13.536626] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:34:49.575 [2024-11-20 07:32:13.536750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76879 ] 00:34:49.575 [2024-11-20 07:32:13.705678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.832 [2024-11-20 07:32:13.822598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.090 [2024-11-20 07:32:14.190373] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:50.090 [2024-11-20 07:32:14.190448] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:50.349 [2024-11-20 07:32:14.354096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.354173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:50.349 [2024-11-20 07:32:14.354190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:50.349 [2024-11-20 07:32:14.354202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.357483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.357523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:50.349 [2024-11-20 07:32:14.357536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:34:50.349 [2024-11-20 07:32:14.357546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.357669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:50.349 [2024-11-20 07:32:14.358705] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:50.349 [2024-11-20 07:32:14.358735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.358746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:50.349 [2024-11-20 07:32:14.358758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:34:50.349 [2024-11-20 07:32:14.358768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.360295] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:50.349 [2024-11-20 07:32:14.380638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.380705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:50.349 [2024-11-20 07:32:14.380722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.343 ms 00:34:50.349 [2024-11-20 07:32:14.380733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.380861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.380877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:50.349 [2024-11-20 07:32:14.380889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:34:50.349 [2024-11-20 
07:32:14.380899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.387832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.387865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:50.349 [2024-11-20 07:32:14.387877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.886 ms 00:34:50.349 [2024-11-20 07:32:14.387888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.387994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.388009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:50.349 [2024-11-20 07:32:14.388021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:50.349 [2024-11-20 07:32:14.388031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.388061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.388076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:50.349 [2024-11-20 07:32:14.388087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:50.349 [2024-11-20 07:32:14.388097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.388125] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:34:50.349 [2024-11-20 07:32:14.393078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.393112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:50.349 [2024-11-20 07:32:14.393124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.962 ms 00:34:50.349 [2024-11-20 07:32:14.393134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.393207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.393219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:50.349 [2024-11-20 07:32:14.393231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:50.349 [2024-11-20 07:32:14.393241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.393265] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:50.349 [2024-11-20 07:32:14.393291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:50.349 [2024-11-20 07:32:14.393327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:50.349 [2024-11-20 07:32:14.393345] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:50.349 [2024-11-20 07:32:14.393439] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:50.349 [2024-11-20 07:32:14.393452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:50.349 [2024-11-20 07:32:14.393466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:34:50.349 [2024-11-20 07:32:14.393480] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:50.349 [2024-11-20 07:32:14.393495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:50.349 [2024-11-20 07:32:14.393506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:34:50.349 [2024-11-20 07:32:14.393517] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:50.349 [2024-11-20 07:32:14.393527] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:50.349 [2024-11-20 07:32:14.393537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:50.349 [2024-11-20 07:32:14.393547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.393558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:50.349 [2024-11-20 07:32:14.393568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:34:50.349 [2024-11-20 07:32:14.393578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.393656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.349 [2024-11-20 07:32:14.393667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:50.349 [2024-11-20 07:32:14.393681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:50.349 [2024-11-20 07:32:14.393691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.349 [2024-11-20 07:32:14.393786] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:50.349 [2024-11-20 07:32:14.393800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:50.349 [2024-11-20 07:32:14.393811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:50.349 [2024-11-20 07:32:14.393834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.349 [2024-11-20 07:32:14.393845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:50.350 [2024-11-20 07:32:14.393855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.393864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:34:50.350 [2024-11-20 07:32:14.393874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:50.350 [2024-11-20 07:32:14.393886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:34:50.350 [2024-11-20 07:32:14.393895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:50.350 [2024-11-20 07:32:14.393905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:50.350 [2024-11-20 07:32:14.393914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:34:50.350 [2024-11-20 07:32:14.393923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:50.350 [2024-11-20 07:32:14.393944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:50.350 [2024-11-20 07:32:14.393953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:34:50.350 [2024-11-20 07:32:14.393963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.393973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:34:50.350 [2024-11-20 07:32:14.393982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:34:50.350 [2024-11-20 07:32:14.393991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:50.350 [2024-11-20 07:32:14.394010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:50.350 [2024-11-20 07:32:14.394038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:50.350 [2024-11-20 07:32:14.394065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:50.350 [2024-11-20 07:32:14.394092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:50.350 [2024-11-20 07:32:14.394129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:50.350 [2024-11-20 07:32:14.394147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:50.350 [2024-11-20 07:32:14.394157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:34:50.350 [2024-11-20 07:32:14.394166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:50.350 [2024-11-20 07:32:14.394175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:50.350 [2024-11-20 07:32:14.394184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:34:50.350 [2024-11-20 07:32:14.394193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:50.350 [2024-11-20 07:32:14.394213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:34:50.350 [2024-11-20 07:32:14.394222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394231] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:50.350 [2024-11-20 07:32:14.394242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:50.350 [2024-11-20 07:32:14.394252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:50.350 [2024-11-20 07:32:14.394276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:50.350 [2024-11-20 07:32:14.394286] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:50.350 [2024-11-20 07:32:14.394296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:50.350 [2024-11-20 07:32:14.394305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:50.350 [2024-11-20 07:32:14.394314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:50.350 [2024-11-20 07:32:14.394324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:50.350 [2024-11-20 07:32:14.394334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:50.350 [2024-11-20 07:32:14.394346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:34:50.350 [2024-11-20 07:32:14.394368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:34:50.350 [2024-11-20 07:32:14.394379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:34:50.350 [2024-11-20 07:32:14.394389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:34:50.350 [2024-11-20 07:32:14.394400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:34:50.350 [2024-11-20 07:32:14.394410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:34:50.350 [2024-11-20 07:32:14.394420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:34:50.350 [2024-11-20 07:32:14.394431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:34:50.350 [2024-11-20 07:32:14.394441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:34:50.350 [2024-11-20 07:32:14.394451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:34:50.350 [2024-11-20 07:32:14.394502] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:50.350 [2024-11-20 07:32:14.394513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:50.350 [2024-11-20 07:32:14.394536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:50.350 [2024-11-20 07:32:14.394546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:50.350 [2024-11-20 07:32:14.394556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:50.350 [2024-11-20 07:32:14.394567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.350 [2024-11-20 07:32:14.394577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:50.350 [2024-11-20 07:32:14.394592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:34:50.350 [2024-11-20 07:32:14.394601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.350 [2024-11-20 07:32:14.436586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.350 [2024-11-20 07:32:14.436653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:50.350 [2024-11-20 07:32:14.436669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.922 ms 00:34:50.351 [2024-11-20 07:32:14.436684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.436900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.436924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:50.351 [2024-11-20 07:32:14.436936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:34:50.351 [2024-11-20 07:32:14.436946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.500489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.500549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:50.351 [2024-11-20 07:32:14.500566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.515 ms 00:34:50.351 [2024-11-20 07:32:14.500581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.500746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.500761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:50.351 [2024-11-20 07:32:14.500773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:50.351 [2024-11-20 07:32:14.500783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.501251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.501275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:50.351 [2024-11-20 07:32:14.501287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:34:50.351 [2024-11-20 07:32:14.501306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.501438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
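The layout dumps above report every region twice: the dump_region records give offsets and sizes in MiB, while the superblock dump gives the same regions as raw block offsets and block counts in hex. The two agree once sizes are scaled by the FTL logical block size; 4 KiB is assumed here, matching the block_size of 4096 that the underlying bdev reports later in this log. A quick cross-check in shell arithmetic:

    # l2p region: type:0x2, blk_sz:0x5a00 in the superblock dump above.
    blk_sz=0x5a00
    echo $(( blk_sz * 4096 / 1024 / 1024 ))    # -> 90 (MiB), matches "blocks: 90.00 MiB"
    # The L2P table itself needs one 4-byte address per logical block:
    echo $(( 23592960 * 4 / 1024 / 1024 ))     # L2P entries * address size -> 90 (MiB)

The same 23592960 entries put the user-visible capacity at about 92160 MiB of the 103424 MiB base device; the remainder is held back for FTL metadata and overprovisioning.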
[FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.501457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:50.351 [2024-11-20 07:32:14.501469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:34:50.351 [2024-11-20 07:32:14.501479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.522889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.522958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:50.351 [2024-11-20 07:32:14.522975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.380 ms 00:34:50.351 [2024-11-20 07:32:14.522986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.351 [2024-11-20 07:32:14.543863] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:50.351 [2024-11-20 07:32:14.543932] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:50.351 [2024-11-20 07:32:14.543951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.351 [2024-11-20 07:32:14.543962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:50.351 [2024-11-20 07:32:14.543976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.801 ms 00:34:50.351 [2024-11-20 07:32:14.543986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.610 [2024-11-20 07:32:14.576641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.610 [2024-11-20 07:32:14.576768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:50.610 [2024-11-20 07:32:14.576787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.503 ms 00:34:50.610 [2024-11-20 07:32:14.576800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.610 [2024-11-20 07:32:14.597164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.610 [2024-11-20 07:32:14.597237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:50.610 [2024-11-20 07:32:14.597254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.175 ms 00:34:50.610 [2024-11-20 07:32:14.597265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.610 [2024-11-20 07:32:14.619346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.610 [2024-11-20 07:32:14.619444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:50.610 [2024-11-20 07:32:14.619461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.939 ms 00:34:50.610 [2024-11-20 07:32:14.619474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.610 [2024-11-20 07:32:14.620460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.610 [2024-11-20 07:32:14.620497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:50.610 [2024-11-20 07:32:14.620512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:34:50.611 [2024-11-20 07:32:14.620525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.716798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 
07:32:14.716873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:50.611 [2024-11-20 07:32:14.716890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.232 ms 00:34:50.611 [2024-11-20 07:32:14.716902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.729854] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:34:50.611 [2024-11-20 07:32:14.747001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.747063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:50.611 [2024-11-20 07:32:14.747081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.928 ms 00:34:50.611 [2024-11-20 07:32:14.747093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.747229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.747245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:50.611 [2024-11-20 07:32:14.747273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:50.611 [2024-11-20 07:32:14.747285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.747347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.747360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:50.611 [2024-11-20 07:32:14.747372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:50.611 [2024-11-20 07:32:14.747383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.747416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.747433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:50.611 [2024-11-20 07:32:14.747444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:50.611 [2024-11-20 07:32:14.747455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.747492] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:50.611 [2024-11-20 07:32:14.747505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.747516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:50.611 [2024-11-20 07:32:14.747527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:50.611 [2024-11-20 07:32:14.747537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.785751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.785808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:50.611 [2024-11-20 07:32:14.785831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.184 ms 00:34:50.611 [2024-11-20 07:32:14.785843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.785976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.611 [2024-11-20 07:32:14.785990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:50.611 [2024-11-20 
07:32:14.786002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:50.611 [2024-11-20 07:32:14.786012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.611 [2024-11-20 07:32:14.787126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:50.611 [2024-11-20 07:32:14.791829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 432.647 ms, result 0 00:34:50.611 [2024-11-20 07:32:14.792718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:50.869 [2024-11-20 07:32:14.812067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:51.803  [2024-11-20T07:32:16.941Z] Copying: 32/256 [MB] (32 MBps) [2024-11-20T07:32:18.315Z] Copying: 61/256 [MB] (29 MBps) [2024-11-20T07:32:18.906Z] Copying: 90/256 [MB] (28 MBps) [2024-11-20T07:32:20.280Z] Copying: 117/256 [MB] (27 MBps) [2024-11-20T07:32:21.215Z] Copying: 146/256 [MB] (28 MBps) [2024-11-20T07:32:22.151Z] Copying: 174/256 [MB] (28 MBps) [2024-11-20T07:32:23.087Z] Copying: 202/256 [MB] (28 MBps) [2024-11-20T07:32:24.023Z] Copying: 232/256 [MB] (29 MBps) [2024-11-20T07:32:24.023Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 07:32:24.010487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:00.079 [2024-11-20 07:32:24.026608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.026666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:00.079 [2024-11-20 07:32:24.026683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:00.079 [2024-11-20 07:32:24.026702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.026729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:35:00.079 [2024-11-20 07:32:24.031288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.031329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:00.079 [2024-11-20 07:32:24.031342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.539 ms 00:35:00.079 [2024-11-20 07:32:24.031353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.031610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.031628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:00.079 [2024-11-20 07:32:24.031639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:35:00.079 [2024-11-20 07:32:24.031650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.034940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.034980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:00.079 [2024-11-20 07:32:24.035015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:35:00.079 [2024-11-20 07:32:24.035026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.043406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:35:00.079 [2024-11-20 07:32:24.043472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:00.079 [2024-11-20 07:32:24.043506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.349 ms 00:35:00.079 [2024-11-20 07:32:24.043528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.085586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.085634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:00.079 [2024-11-20 07:32:24.085649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.927 ms 00:35:00.079 [2024-11-20 07:32:24.085684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.109929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.109991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:00.079 [2024-11-20 07:32:24.110011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.168 ms 00:35:00.079 [2024-11-20 07:32:24.110035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.110253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.110275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:00.079 [2024-11-20 07:32:24.110292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:35:00.079 [2024-11-20 07:32:24.110308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.153713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.153766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:00.079 [2024-11-20 07:32:24.153786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.334 ms 00:35:00.079 [2024-11-20 07:32:24.153801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.197709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.197768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:00.079 [2024-11-20 07:32:24.197788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.801 ms 00:35:00.079 [2024-11-20 07:32:24.197803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.079 [2024-11-20 07:32:24.239505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.079 [2024-11-20 07:32:24.239563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:00.079 [2024-11-20 07:32:24.239580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.613 ms 00:35:00.079 [2024-11-20 07:32:24.239591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.343 [2024-11-20 07:32:24.281644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.343 [2024-11-20 07:32:24.281701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:00.343 [2024-11-20 07:32:24.281721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.934 ms 00:35:00.343 [2024-11-20 07:32:24.281737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.343 [2024-11-20 
07:32:24.281825] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:00.343 [2024-11-20 07:32:24.281851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.281992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 
07:32:24.282253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:00.343 [2024-11-20 07:32:24.282289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:35:00.344 [2024-11-20 07:32:24.282561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.282991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:00.344 [2024-11-20 07:32:24.283267] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:00.344 [2024-11-20 07:32:24.283280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e94c8c0-f203-44d7-914b-d7ad4a7525b4 00:35:00.344 [2024-11-20 07:32:24.283294] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:00.344 [2024-11-20 07:32:24.283306] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:00.344 [2024-11-20 07:32:24.283318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:00.344 [2024-11-20 07:32:24.283330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:00.344 [2024-11-20 07:32:24.283348] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:00.344 [2024-11-20 07:32:24.283361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:00.344 [2024-11-20 07:32:24.283373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:00.344 [2024-11-20 07:32:24.283383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:00.344 [2024-11-20 07:32:24.283394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:00.344 [2024-11-20 07:32:24.283406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.344 [2024-11-20 07:32:24.283425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:00.344 [2024-11-20 07:32:24.283438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:35:00.344 [2024-11-20 07:32:24.283451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.306677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.344 [2024-11-20 07:32:24.306741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:00.344 [2024-11-20 07:32:24.306772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.199 ms 00:35:00.344 [2024-11-20 07:32:24.306785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.307397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.344 [2024-11-20 07:32:24.307425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:00.344 [2024-11-20 07:32:24.307439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:35:00.344 [2024-11-20 07:32:24.307451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.370785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.344 [2024-11-20 07:32:24.370846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:00.344 [2024-11-20 07:32:24.370863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.344 [2024-11-20 07:32:24.370876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.371015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.344 [2024-11-20 07:32:24.371029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:00.344 [2024-11-20 07:32:24.371041] mngt/ftl_mngt.c: 
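The statistics block just above shows "total writes: 960" against "user writes: 0", and "WAF: inf". Reading WAF as the usual write amplification factor, total device writes divided by user writes, that is consistent: everything written during this phase was FTL metadata rather than user data, so the ratio is undefined and printed as inf. A minimal sketch of the same computation with a divide-by-zero guard (variable names are illustrative, not taken from the FTL code):

    total=960 user=0
    if (( user == 0 )); then
      echo "WAF: inf"                # no user writes yet, as in the dump above
    else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi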
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.344 [2024-11-20 07:32:24.371053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.371124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.344 [2024-11-20 07:32:24.371140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:00.344 [2024-11-20 07:32:24.371152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.344 [2024-11-20 07:32:24.371164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.371185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.344 [2024-11-20 07:32:24.371203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:00.344 [2024-11-20 07:32:24.371215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.344 [2024-11-20 07:32:24.371227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.344 [2024-11-20 07:32:24.516250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.344 [2024-11-20 07:32:24.516317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:00.344 [2024-11-20 07:32:24.516334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.344 [2024-11-20 07:32:24.516347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.633892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.633969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:00.603 [2024-11-20 07:32:24.633989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:00.603 [2024-11-20 07:32:24.634160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:00.603 [2024-11-20 07:32:24.634234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:00.603 [2024-11-20 07:32:24.634433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:35:00.603 [2024-11-20 07:32:24.634516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:00.603 [2024-11-20 07:32:24.634606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.603 [2024-11-20 07:32:24.634698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:00.603 [2024-11-20 07:32:24.634715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.603 [2024-11-20 07:32:24.634727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.603 [2024-11-20 07:32:24.634900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 608.291 ms, result 0 00:35:01.978 00:35:01.978 00:35:01.978 07:32:25 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:02.280 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:35:02.280 07:32:26 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:35:02.539 07:32:26 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76811 00:35:02.539 07:32:26 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76811 ']' 00:35:02.539 Process with pid 76811 is not found 00:35:02.539 07:32:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76811 00:35:02.539 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76811) - No such process 00:35:02.539 07:32:26 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76811 is not found' 00:35:02.539 00:35:02.539 real 1m7.373s 00:35:02.539 user 1m33.810s 00:35:02.539 sys 0m7.386s 00:35:02.539 ************************************ 00:35:02.539 END TEST ftl_trim 00:35:02.539 ************************************ 00:35:02.539 07:32:26 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.539 07:32:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:35:02.539 07:32:26 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:35:02.539 07:32:26 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:02.539 07:32:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.539 07:32:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:02.539 ************************************ 00:35:02.539 START TEST ftl_restore 00:35:02.539 
************************************ 00:35:02.539 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:35:02.539 * Looking for test storage... 00:35:02.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:35:02.539 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:02.539 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:35:02.539 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:02.798 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:02.798 07:32:26 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:02.799 07:32:26 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:02.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.799 --rc genhtml_branch_coverage=1 00:35:02.799 --rc genhtml_function_coverage=1 00:35:02.799 --rc genhtml_legend=1 00:35:02.799 --rc geninfo_all_blocks=1 00:35:02.799 --rc geninfo_unexecuted_blocks=1 00:35:02.799 00:35:02.799 ' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:02.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.799 --rc genhtml_branch_coverage=1 00:35:02.799 --rc genhtml_function_coverage=1 00:35:02.799 --rc genhtml_legend=1 00:35:02.799 --rc geninfo_all_blocks=1 00:35:02.799 --rc geninfo_unexecuted_blocks=1 00:35:02.799 00:35:02.799 ' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:02.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.799 --rc genhtml_branch_coverage=1 00:35:02.799 --rc genhtml_function_coverage=1 00:35:02.799 --rc genhtml_legend=1 00:35:02.799 --rc geninfo_all_blocks=1 00:35:02.799 --rc geninfo_unexecuted_blocks=1 00:35:02.799 00:35:02.799 ' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:02.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.799 --rc genhtml_branch_coverage=1 00:35:02.799 --rc genhtml_function_coverage=1 00:35:02.799 --rc genhtml_legend=1 00:35:02.799 --rc geninfo_all_blocks=1 00:35:02.799 --rc geninfo_unexecuted_blocks=1 00:35:02.799 00:35:02.799 ' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
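The xtrace above walks through the lcov version gate from scripts/common.sh: lt 1.15 2 splits both version strings on '.', '-' and ':' (IFS=.-:), then compares them field by field as integers, deciding as soon as one side differs. A condensed, self-contained sketch of that pattern (the helper name is shortened; the real script builds lt on top of a generic cmp_versions):

    #!/usr/bin/env bash
    # version_lt A B: succeed when version A sorts strictly before version B.
    # Missing fields compare as 0, so 1.15 < 2 because 1 < 2 in field one.
    version_lt() {
      local IFS=.-: v
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "1.15 sorts before 2"

Note the comparison is purely numeric per field, which is why 1.15 is treated as (1, 15) rather than as the decimal 1.15.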
00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.aKL5GsQCER 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:35:02.799 
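restore.sh was invoked here as restore.sh -c 0000:00:10.0 0000:00:11.0, and the trace shows how it takes that apart: getopts :u:c:f consumes -c as the NV-cache PCI address, shift 2 then leaves the base device address as the first positional argument, and a 240-second timeout plus a cleanup trap are armed before the target starts. A minimal sketch of the same argument-handling pattern; the -u and -f letters are accepted by the getopts spec above, but their handling is not visible in this excerpt, so they are left as placeholders:

    #!/usr/bin/env bash
    # usage: restore_demo.sh [-c nv_cache_bdf] device_bdf
    nv_cache='' timeout=240
    while getopts :u:c:f opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;              # -c 0000:00:10.0 in the run above
        u|f) ;;                             # accepted by the real script; handling not shown here
        *) echo "usage: $0 [-c bdf] device_bdf" >&2; exit 1 ;;
      esac
    done
    shift $(( OPTIND - 1 ))                 # the trace shows 'shift 2' for the one -c pair
    device=$1
    trap 'echo cleanup >&2; exit 1' SIGINT SIGTERM EXIT
    echo "nv_cache=$nv_cache device=$device timeout=$timeout"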
07:32:26 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77083 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77083 00:35:02.799 07:32:26 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77083 ']' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.799 07:32:26 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:35:02.799 [2024-11-20 07:32:26.998306] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:35:02.799 [2024-11-20 07:32:26.998483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77083 ] 00:35:03.058 [2024-11-20 07:32:27.203155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.317 [2024-11-20 07:32:27.385198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.251 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.251 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:35:04.251 07:32:28 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:04.816 07:32:28 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:35:04.816 07:32:28 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:35:04.816 07:32:28 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:35:04.816 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:35:04.816 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:04.816 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:35:04.816 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:35:04.816 07:32:28 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:05.075 { 00:35:05.075 "name": "nvme0n1", 00:35:05.075 "aliases": [ 00:35:05.075 "0fdcf0ed-10a5-4541-a94c-d0600213b571" 00:35:05.075 ], 00:35:05.075 "product_name": "NVMe disk", 00:35:05.075 "block_size": 4096, 00:35:05.075 "num_blocks": 1310720, 00:35:05.075 "uuid": 
"0fdcf0ed-10a5-4541-a94c-d0600213b571", 00:35:05.075 "numa_id": -1, 00:35:05.075 "assigned_rate_limits": { 00:35:05.075 "rw_ios_per_sec": 0, 00:35:05.075 "rw_mbytes_per_sec": 0, 00:35:05.075 "r_mbytes_per_sec": 0, 00:35:05.075 "w_mbytes_per_sec": 0 00:35:05.075 }, 00:35:05.075 "claimed": true, 00:35:05.075 "claim_type": "read_many_write_one", 00:35:05.075 "zoned": false, 00:35:05.075 "supported_io_types": { 00:35:05.075 "read": true, 00:35:05.075 "write": true, 00:35:05.075 "unmap": true, 00:35:05.075 "flush": true, 00:35:05.075 "reset": true, 00:35:05.075 "nvme_admin": true, 00:35:05.075 "nvme_io": true, 00:35:05.075 "nvme_io_md": false, 00:35:05.075 "write_zeroes": true, 00:35:05.075 "zcopy": false, 00:35:05.075 "get_zone_info": false, 00:35:05.075 "zone_management": false, 00:35:05.075 "zone_append": false, 00:35:05.075 "compare": true, 00:35:05.075 "compare_and_write": false, 00:35:05.075 "abort": true, 00:35:05.075 "seek_hole": false, 00:35:05.075 "seek_data": false, 00:35:05.075 "copy": true, 00:35:05.075 "nvme_iov_md": false 00:35:05.075 }, 00:35:05.075 "driver_specific": { 00:35:05.075 "nvme": [ 00:35:05.075 { 00:35:05.075 "pci_address": "0000:00:11.0", 00:35:05.075 "trid": { 00:35:05.075 "trtype": "PCIe", 00:35:05.075 "traddr": "0000:00:11.0" 00:35:05.075 }, 00:35:05.075 "ctrlr_data": { 00:35:05.075 "cntlid": 0, 00:35:05.075 "vendor_id": "0x1b36", 00:35:05.075 "model_number": "QEMU NVMe Ctrl", 00:35:05.075 "serial_number": "12341", 00:35:05.075 "firmware_revision": "8.0.0", 00:35:05.075 "subnqn": "nqn.2019-08.org.qemu:12341", 00:35:05.075 "oacs": { 00:35:05.075 "security": 0, 00:35:05.075 "format": 1, 00:35:05.075 "firmware": 0, 00:35:05.075 "ns_manage": 1 00:35:05.075 }, 00:35:05.075 "multi_ctrlr": false, 00:35:05.075 "ana_reporting": false 00:35:05.075 }, 00:35:05.075 "vs": { 00:35:05.075 "nvme_version": "1.4" 00:35:05.075 }, 00:35:05.075 "ns_data": { 00:35:05.075 "id": 1, 00:35:05.075 "can_share": false 00:35:05.075 } 00:35:05.075 } 00:35:05.075 ], 00:35:05.075 "mp_policy": "active_passive" 00:35:05.075 } 00:35:05.075 } 00:35:05.075 ]' 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:35:05.075 07:32:29 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:35:05.075 07:32:29 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:35:05.075 07:32:29 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:35:05.075 07:32:29 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:35:05.075 07:32:29 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:05.075 07:32:29 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:05.333 07:32:29 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a745766a-db3e-44eb-ae08-14f72e5ecd14 00:35:05.333 07:32:29 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:35:05.333 07:32:29 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a745766a-db3e-44eb-ae08-14f72e5ecd14 00:35:05.899 07:32:29 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:35:05.899 07:32:30 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=0a7b021e-9f1d-4924-bb47-fb0137ac838a 00:35:05.899 07:32:30 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0a7b021e-9f1d-4924-bb47-fb0137ac838a 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:35:06.466 07:32:30 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.466 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.466 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:06.466 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:35:06.466 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:35:06.466 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:06.725 { 00:35:06.725 "name": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:06.725 "aliases": [ 00:35:06.725 "lvs/nvme0n1p0" 00:35:06.725 ], 00:35:06.725 "product_name": "Logical Volume", 00:35:06.725 "block_size": 4096, 00:35:06.725 "num_blocks": 26476544, 00:35:06.725 "uuid": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:06.725 "assigned_rate_limits": { 00:35:06.725 "rw_ios_per_sec": 0, 00:35:06.725 "rw_mbytes_per_sec": 0, 00:35:06.725 "r_mbytes_per_sec": 0, 00:35:06.725 "w_mbytes_per_sec": 0 00:35:06.725 }, 00:35:06.725 "claimed": false, 00:35:06.725 "zoned": false, 00:35:06.725 "supported_io_types": { 00:35:06.725 "read": true, 00:35:06.725 "write": true, 00:35:06.725 "unmap": true, 00:35:06.725 "flush": false, 00:35:06.725 "reset": true, 00:35:06.725 "nvme_admin": false, 00:35:06.725 "nvme_io": false, 00:35:06.725 "nvme_io_md": false, 00:35:06.725 "write_zeroes": true, 00:35:06.725 "zcopy": false, 00:35:06.725 "get_zone_info": false, 00:35:06.725 "zone_management": false, 00:35:06.725 "zone_append": false, 00:35:06.725 "compare": false, 00:35:06.725 "compare_and_write": false, 00:35:06.725 "abort": false, 00:35:06.725 "seek_hole": true, 00:35:06.725 "seek_data": true, 00:35:06.725 "copy": false, 00:35:06.725 "nvme_iov_md": false 00:35:06.725 }, 00:35:06.725 "driver_specific": { 00:35:06.725 "lvol": { 00:35:06.725 "lvol_store_uuid": "0a7b021e-9f1d-4924-bb47-fb0137ac838a", 00:35:06.725 "base_bdev": "nvme0n1", 00:35:06.725 "thin_provision": true, 00:35:06.725 "num_allocated_clusters": 0, 00:35:06.725 "snapshot": false, 00:35:06.725 "clone": false, 00:35:06.725 "esnap_clone": false 00:35:06.725 } 00:35:06.725 } 00:35:06.725 } 00:35:06.725 ]' 00:35:06.725 07:32:30 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:06.725 07:32:30 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:35:06.725 07:32:30 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:35:06.725 07:32:30 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:35:06.725 07:32:30 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:35:07.290 07:32:31 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:35:07.290 07:32:31 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:35:07.291 07:32:31 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:07.291 { 00:35:07.291 "name": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:07.291 "aliases": [ 00:35:07.291 "lvs/nvme0n1p0" 00:35:07.291 ], 00:35:07.291 "product_name": "Logical Volume", 00:35:07.291 "block_size": 4096, 00:35:07.291 "num_blocks": 26476544, 00:35:07.291 "uuid": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:07.291 "assigned_rate_limits": { 00:35:07.291 "rw_ios_per_sec": 0, 00:35:07.291 "rw_mbytes_per_sec": 0, 00:35:07.291 "r_mbytes_per_sec": 0, 00:35:07.291 "w_mbytes_per_sec": 0 00:35:07.291 }, 00:35:07.291 "claimed": false, 00:35:07.291 "zoned": false, 00:35:07.291 "supported_io_types": { 00:35:07.291 "read": true, 00:35:07.291 "write": true, 00:35:07.291 "unmap": true, 00:35:07.291 "flush": false, 00:35:07.291 "reset": true, 00:35:07.291 "nvme_admin": false, 00:35:07.291 "nvme_io": false, 00:35:07.291 "nvme_io_md": false, 00:35:07.291 "write_zeroes": true, 00:35:07.291 "zcopy": false, 00:35:07.291 "get_zone_info": false, 00:35:07.291 "zone_management": false, 00:35:07.291 "zone_append": false, 00:35:07.291 "compare": false, 00:35:07.291 "compare_and_write": false, 00:35:07.291 "abort": false, 00:35:07.291 "seek_hole": true, 00:35:07.291 "seek_data": true, 00:35:07.291 "copy": false, 00:35:07.291 "nvme_iov_md": false 00:35:07.291 }, 00:35:07.291 "driver_specific": { 00:35:07.291 "lvol": { 00:35:07.291 "lvol_store_uuid": "0a7b021e-9f1d-4924-bb47-fb0137ac838a", 00:35:07.291 "base_bdev": "nvme0n1", 00:35:07.291 "thin_provision": true, 00:35:07.291 "num_allocated_clusters": 0, 00:35:07.291 "snapshot": false, 00:35:07.291 "clone": false, 00:35:07.291 "esnap_clone": false 00:35:07.291 } 00:35:07.291 } 00:35:07.291 } 00:35:07.291 ]' 00:35:07.291 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
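The get_bdev_size calls traced here derive a bdev's size in MiB from the bdev_get_bdevs JSON as block_size * num_blocks / 1024 / 1024: with block_size 4096 and num_blocks 26476544 that gives the 103424 MiB assigned below, and 4096 * 1310720 gave the 5120 MiB computed for nvme0n1 earlier. A condensed sketch of that computation, assuming jq is on PATH (variable names are illustrative; the rpc.py path and jq filters are the ones traced in this log):

    # Size of a bdev in MiB, from the same two jq queries the trace runs.
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a68da390-a3c6-49a9-9077-7c57cea8e387)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in this run
    echo $((bs * nb / 1024 / 1024))                # prints 103424 (MiB)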
00:35:07.548 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:35:07.548 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:07.548 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:35:07.548 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:07.548 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:35:07.548 07:32:31 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:35:07.548 07:32:31 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:35:07.806 07:32:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:35:07.806 07:32:31 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:07.806 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:07.806 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:35:07.806 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:35:07.806 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:35:07.806 07:32:31 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a68da390-a3c6-49a9-9077-7c57cea8e387 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:35:08.064 { 00:35:08.064 "name": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:08.064 "aliases": [ 00:35:08.064 "lvs/nvme0n1p0" 00:35:08.064 ], 00:35:08.064 "product_name": "Logical Volume", 00:35:08.064 "block_size": 4096, 00:35:08.064 "num_blocks": 26476544, 00:35:08.064 "uuid": "a68da390-a3c6-49a9-9077-7c57cea8e387", 00:35:08.064 "assigned_rate_limits": { 00:35:08.064 "rw_ios_per_sec": 0, 00:35:08.064 "rw_mbytes_per_sec": 0, 00:35:08.064 "r_mbytes_per_sec": 0, 00:35:08.064 "w_mbytes_per_sec": 0 00:35:08.064 }, 00:35:08.064 "claimed": false, 00:35:08.064 "zoned": false, 00:35:08.064 "supported_io_types": { 00:35:08.064 "read": true, 00:35:08.064 "write": true, 00:35:08.064 "unmap": true, 00:35:08.064 "flush": false, 00:35:08.064 "reset": true, 00:35:08.064 "nvme_admin": false, 00:35:08.064 "nvme_io": false, 00:35:08.064 "nvme_io_md": false, 00:35:08.064 "write_zeroes": true, 00:35:08.064 "zcopy": false, 00:35:08.064 "get_zone_info": false, 00:35:08.064 "zone_management": false, 00:35:08.064 "zone_append": false, 00:35:08.064 "compare": false, 00:35:08.064 "compare_and_write": false, 00:35:08.064 "abort": false, 00:35:08.064 "seek_hole": true, 00:35:08.064 "seek_data": true, 00:35:08.064 "copy": false, 00:35:08.064 "nvme_iov_md": false 00:35:08.064 }, 00:35:08.064 "driver_specific": { 00:35:08.064 "lvol": { 00:35:08.064 "lvol_store_uuid": "0a7b021e-9f1d-4924-bb47-fb0137ac838a", 00:35:08.064 "base_bdev": "nvme0n1", 00:35:08.064 "thin_provision": true, 00:35:08.064 "num_allocated_clusters": 0, 00:35:08.064 "snapshot": false, 00:35:08.064 "clone": false, 00:35:08.064 "esnap_clone": false 00:35:08.064 } 00:35:08.064 } 00:35:08.064 } 00:35:08.064 ]' 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:35:08.064 07:32:32 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:35:08.064 07:32:32 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:35:08.064 07:32:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:35:08.064 07:32:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a68da390-a3c6-49a9-9077-7c57cea8e387 --l2p_dram_limit 10' 00:35:08.064 07:32:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:35:08.064 07:32:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:35:08.065 07:32:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:35:08.065 07:32:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:35:08.065 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:35:08.065 07:32:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a68da390-a3c6-49a9-9077-7c57cea8e387 --l2p_dram_limit 10 -c nvc0n1p0 00:35:08.631 [2024-11-20 07:32:32.561383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.561646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:08.631 [2024-11-20 07:32:32.561775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:08.631 [2024-11-20 07:32:32.561853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.561991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.562125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:08.631 [2024-11-20 07:32:32.562240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:35:08.631 [2024-11-20 07:32:32.562278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.562349] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:08.631 [2024-11-20 07:32:32.563713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:08.631 [2024-11-20 07:32:32.563942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.564054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:08.631 [2024-11-20 07:32:32.564111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:35:08.631 [2024-11-20 07:32:32.564200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.564408] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5a6538a1-e573-4c8a-9e5f-4aff796a6df9 00:35:08.631 [2024-11-20 07:32:32.566130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.566303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:35:08.631 [2024-11-20 07:32:32.566405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:08.631 [2024-11-20 07:32:32.566453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.574365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 
07:32:32.574558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:08.631 [2024-11-20 07:32:32.574672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.801 ms 00:35:08.631 [2024-11-20 07:32:32.574804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.574981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.575039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:08.631 [2024-11-20 07:32:32.575131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:35:08.631 [2024-11-20 07:32:32.575241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.575380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.575437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:08.631 [2024-11-20 07:32:32.575530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:08.631 [2024-11-20 07:32:32.575583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.575755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:08.631 [2024-11-20 07:32:32.582108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.582287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:08.631 [2024-11-20 07:32:32.582387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.364 ms 00:35:08.631 [2024-11-20 07:32:32.582481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.582562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.582579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:08.631 [2024-11-20 07:32:32.582596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:08.631 [2024-11-20 07:32:32.582608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.582663] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:35:08.631 [2024-11-20 07:32:32.582863] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:08.631 [2024-11-20 07:32:32.582890] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:08.631 [2024-11-20 07:32:32.582907] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:08.631 [2024-11-20 07:32:32.582926] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:08.631 [2024-11-20 07:32:32.582941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:08.631 [2024-11-20 07:32:32.582958] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:08.631 [2024-11-20 07:32:32.582970] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:08.631 [2024-11-20 07:32:32.582989] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:08.631 [2024-11-20 07:32:32.583001] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:08.631 [2024-11-20 07:32:32.583017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.583029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:08.631 [2024-11-20 07:32:32.583045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:35:08.631 [2024-11-20 07:32:32.583069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.583172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.631 [2024-11-20 07:32:32.583186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:08.631 [2024-11-20 07:32:32.583202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:08.631 [2024-11-20 07:32:32.583213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.631 [2024-11-20 07:32:32.583332] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:08.631 [2024-11-20 07:32:32.583349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:08.631 [2024-11-20 07:32:32.583365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:08.631 [2024-11-20 07:32:32.583378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.631 [2024-11-20 07:32:32.583394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:08.631 [2024-11-20 07:32:32.583405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:08.631 [2024-11-20 07:32:32.583419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:08.631 [2024-11-20 07:32:32.583431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:08.631 [2024-11-20 07:32:32.583445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:08.631 [2024-11-20 07:32:32.583456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:08.631 [2024-11-20 07:32:32.583471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:08.631 [2024-11-20 07:32:32.583482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:08.631 [2024-11-20 07:32:32.583496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:08.631 [2024-11-20 07:32:32.583508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:08.631 [2024-11-20 07:32:32.583522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:08.632 [2024-11-20 07:32:32.583534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:08.632 [2024-11-20 07:32:32.583569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:08.632 [2024-11-20 07:32:32.583620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:08.632 
[2024-11-20 07:32:32.583656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:08.632 [2024-11-20 07:32:32.583696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:08.632 [2024-11-20 07:32:32.583732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:08.632 [2024-11-20 07:32:32.583774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:08.632 [2024-11-20 07:32:32.583801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:08.632 [2024-11-20 07:32:32.583825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:08.632 [2024-11-20 07:32:32.583840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:08.632 [2024-11-20 07:32:32.583852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:08.632 [2024-11-20 07:32:32.583867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:08.632 [2024-11-20 07:32:32.583878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:08.632 [2024-11-20 07:32:32.583904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:08.632 [2024-11-20 07:32:32.583917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:08.632 [2024-11-20 07:32:32.583943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:08.632 [2024-11-20 07:32:32.583955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:08.632 [2024-11-20 07:32:32.583972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:08.632 [2024-11-20 07:32:32.583984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:08.632 [2024-11-20 07:32:32.584001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:08.632 [2024-11-20 07:32:32.584013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:08.632 [2024-11-20 07:32:32.584028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:08.632 [2024-11-20 07:32:32.584039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:08.632 [2024-11-20 07:32:32.584054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:08.632 [2024-11-20 07:32:32.584071] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:08.632 [2024-11-20 
07:32:32.584093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:08.632 [2024-11-20 07:32:32.584129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:08.632 [2024-11-20 07:32:32.584142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:08.632 [2024-11-20 07:32:32.584157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:08.632 [2024-11-20 07:32:32.584170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:08.632 [2024-11-20 07:32:32.584185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:08.632 [2024-11-20 07:32:32.584198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:08.632 [2024-11-20 07:32:32.584213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:08.632 [2024-11-20 07:32:32.584226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:08.632 [2024-11-20 07:32:32.584244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:08.632 [2024-11-20 07:32:32.584316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:08.632 [2024-11-20 07:32:32.584333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:08.632 [2024-11-20 07:32:32.584362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:08.632 [2024-11-20 07:32:32.584375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:08.632 [2024-11-20 07:32:32.584390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:08.632 [2024-11-20 07:32:32.584403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.632 [2024-11-20 07:32:32.584419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:08.632 [2024-11-20 07:32:32.584432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:35:08.632 [2024-11-20 07:32:32.584447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.632 [2024-11-20 07:32:32.584499] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:35:08.632 [2024-11-20 07:32:32.584527] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:35:11.914 [2024-11-20 07:32:35.615108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.615474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:35:11.914 [2024-11-20 07:32:35.615619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3030.588 ms 00:35:11.914 [2024-11-20 07:32:35.615688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.684745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.685122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:11.914 [2024-11-20 07:32:35.685287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.486 ms 00:35:11.914 [2024-11-20 07:32:35.685357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.685660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.685739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:11.914 [2024-11-20 07:32:35.685894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:35:11.914 [2024-11-20 07:32:35.686064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.760109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.760425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:11.914 [2024-11-20 07:32:35.760575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.839 ms 00:35:11.914 [2024-11-20 07:32:35.760642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.760760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.761055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:11.914 [2024-11-20 07:32:35.761123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:11.914 [2024-11-20 07:32:35.761251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.762331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.762513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:11.914 [2024-11-20 07:32:35.762628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:35:11.914 [2024-11-20 07:32:35.762738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 
[2024-11-20 07:32:35.762977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.763053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:11.914 [2024-11-20 07:32:35.763175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:35:11.914 [2024-11-20 07:32:35.763245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.795043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.795278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:11.914 [2024-11-20 07:32:35.795383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.576 ms 00:35:11.914 [2024-11-20 07:32:35.795426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.811960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:11.914 [2024-11-20 07:32:35.817614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.817783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:11.914 [2024-11-20 07:32:35.817830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.966 ms 00:35:11.914 [2024-11-20 07:32:35.817843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.909578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.909921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:35:11.914 [2024-11-20 07:32:35.909954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.653 ms 00:35:11.914 [2024-11-20 07:32:35.909980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.910247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.910271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:11.914 [2024-11-20 07:32:35.910292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:35:11.914 [2024-11-20 07:32:35.910303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.948251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.948304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:35:11.914 [2024-11-20 07:32:35.948326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.844 ms 00:35:11.914 [2024-11-20 07:32:35.948338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.986621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.986678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:35:11.914 [2024-11-20 07:32:35.986700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.221 ms 00:35:11.914 [2024-11-20 07:32:35.986712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.914 [2024-11-20 07:32:35.987654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.914 [2024-11-20 07:32:35.987683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:11.915 
[2024-11-20 07:32:35.987700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:35:11.915 [2024-11-20 07:32:35.987713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:11.915 [2024-11-20 07:32:36.099117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:11.915 [2024-11-20 07:32:36.099389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:35:11.915 [2024-11-20 07:32:36.099429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.319 ms 00:35:11.915 [2024-11-20 07:32:36.099442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.173 [2024-11-20 07:32:36.140922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.173 [2024-11-20 07:32:36.141159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:35:12.173 [2024-11-20 07:32:36.141192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.346 ms 00:35:12.173 [2024-11-20 07:32:36.141204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.173 [2024-11-20 07:32:36.181960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.173 [2024-11-20 07:32:36.182022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:35:12.173 [2024-11-20 07:32:36.182044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.668 ms 00:35:12.173 [2024-11-20 07:32:36.182055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.173 [2024-11-20 07:32:36.221081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.173 [2024-11-20 07:32:36.221155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:12.173 [2024-11-20 07:32:36.221176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.968 ms 00:35:12.173 [2024-11-20 07:32:36.221187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.173 [2024-11-20 07:32:36.221244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.173 [2024-11-20 07:32:36.221257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:12.173 [2024-11-20 07:32:36.221278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:12.173 [2024-11-20 07:32:36.221289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.173 [2024-11-20 07:32:36.221423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.173 [2024-11-20 07:32:36.221437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:12.173 [2024-11-20 07:32:36.221456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:35:12.174 [2024-11-20 07:32:36.221466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.174 [2024-11-20 07:32:36.223107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3661.117 ms, result 0 00:35:12.174 { 00:35:12.174 "name": "ftl0", 00:35:12.174 "uuid": "5a6538a1-e573-4c8a-9e5f-4aff796a6df9" 00:35:12.174 } 00:35:12.174 07:32:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:35:12.174 07:32:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:35:12.432 07:32:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:35:12.432 07:32:36 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:35:12.690 [2024-11-20 07:32:36.722039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.722358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:12.690 [2024-11-20 07:32:36.722387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:12.690 [2024-11-20 07:32:36.722415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.690 [2024-11-20 07:32:36.722463] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:12.690 [2024-11-20 07:32:36.727518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.727553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:12.690 [2024-11-20 07:32:36.727570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:35:12.690 [2024-11-20 07:32:36.727582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.690 [2024-11-20 07:32:36.727903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.727920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:12.690 [2024-11-20 07:32:36.727941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:35:12.690 [2024-11-20 07:32:36.727952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.690 [2024-11-20 07:32:36.730549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.730574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:12.690 [2024-11-20 07:32:36.730590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:35:12.690 [2024-11-20 07:32:36.730601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.690 [2024-11-20 07:32:36.736278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.736369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:12.690 [2024-11-20 07:32:36.736406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.642 ms 00:35:12.690 [2024-11-20 07:32:36.736426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.690 [2024-11-20 07:32:36.776494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.690 [2024-11-20 07:32:36.776704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:12.690 [2024-11-20 07:32:36.776734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.947 ms 00:35:12.690 [2024-11-20 07:32:36.776746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.691 [2024-11-20 07:32:36.802104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.691 [2024-11-20 07:32:36.802326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:12.691 [2024-11-20 07:32:36.802360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.260 ms 00:35:12.691 [2024-11-20 07:32:36.802373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.691 [2024-11-20 07:32:36.802565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.691 [2024-11-20 07:32:36.802582] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:12.691 [2024-11-20 07:32:36.802600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:35:12.691 [2024-11-20 07:32:36.802612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.691 [2024-11-20 07:32:36.843496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.691 [2024-11-20 07:32:36.843576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:12.691 [2024-11-20 07:32:36.843599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.854 ms 00:35:12.691 [2024-11-20 07:32:36.843611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.691 [2024-11-20 07:32:36.884860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.691 [2024-11-20 07:32:36.884934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:12.691 [2024-11-20 07:32:36.884957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.175 ms 00:35:12.691 [2024-11-20 07:32:36.884972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.951 [2024-11-20 07:32:36.925790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.951 [2024-11-20 07:32:36.925865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:12.951 [2024-11-20 07:32:36.925887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.724 ms 00:35:12.951 [2024-11-20 07:32:36.925903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.951 [2024-11-20 07:32:36.963065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:12.951 [2024-11-20 07:32:36.963121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:12.951 [2024-11-20 07:32:36.963141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.005 ms 00:35:12.951 [2024-11-20 07:32:36.963152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:12.951 [2024-11-20 07:32:36.963207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:12.951 [2024-11-20 07:32:36.963228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:12.951 [2024-11-20 07:32:36.963356] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:35:12.951 ... (Bands 11-99: identical, 0 / 261120 wr_cnt: 0 state: free) ...
00:35:12.952 [2024-11-20 07:32:36.964609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:35:12.952 [2024-11-20 07:32:36.964630] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:35:12.952 [2024-11-20 07:32:36.964649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a6538a1-e573-4c8a-9e5f-4aff796a6df9
00:35:12.952 [2024-11-20 07:32:36.964661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:35:12.952 [2024-11-20 07:32:36.964678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:35:12.952 [2024-11-20 07:32:36.964689] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:35:12.952 [2024-11-20 07:32:36.964709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:35:12.952 [2024-11-20 07:32:36.964720] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:35:12.952 [2024-11-20 07:32:36.964733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:35:12.952 [2024-11-20 07:32:36.964744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:35:12.952 [2024-11-20 07:32:36.964757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:35:12.952 [2024-11-20 07:32:36.964767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:35:12.952 [2024-11-20 07:32:36.964781] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.576 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:36.987716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 22.816 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:36.988655] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.568 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:37.060894] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc (duration: 0.000 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:37.061117] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata (duration: 0.000 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:37.061332] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map (duration: 0.000 ms, status: 0)
00:35:12.952 [2024-11-20 07:32:37.061403] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.201894] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.312131] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.312614] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.312728] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.312946] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.313036] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.313129] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.313233] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev (duration: 0.000 ms, status: 0)
00:35:13.232 [2024-11-20 07:32:37.313440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 591.353 ms, result 0
00:35:13.232 true
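Every management step above is traced with a name, a duration and a status, so ranking where the 591 ms shutdown went is a one-liner. A minimal sketch against the condensed form shown in this excerpt (GNU sed/sort assumed; "build.log" is a placeholder name for this console output, not a file the test produces):

    # Rank FTL management steps by duration, slowest first.
    grep -E 'trace_step: .*\(duration: [0-9.]+ ms' build.log |
      sed -E 's/.*\[ftl0\] (Action|Rollback): (.+) \(duration: ([0-9.]+) ms.*/\3\t\2/' |
      sort -rn | head
    # prints e.g. "22.816  Deinitialize L2P" at the top
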
00:35:13.232 07:32:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77083
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77083 ']'
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77083
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77083
00:35:13.232 killing process with pid 77083
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77083'
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77083
00:35:13.232 07:32:37 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77083
00:35:18.567 07:32:42 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:35:23.918 262144+0 records in
00:35:23.918 262144+0 records out
00:35:23.918 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.77287 s, 225 MB/s
00:35:23.918 07:32:47 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
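The restore test is a classic dd round trip: fill a file from /dev/urandom, record its md5, push it through the FTL bdev with spdk_dd, and later read it back and compare. A hedged sketch of that flow; the paths are placeholders and the read-back direction is not shown in this log, so the second spdk_dd call is illustrative rather than copied from restore.sh:

    set -e
    testfile=/tmp/ftl_testfile; dumpfile=/tmp/ftl_readback       # placeholder paths
    dd if=/dev/urandom of="$testfile" bs=4K count=256K           # 1 GiB of random data
    before=$(md5sum "$testfile" | cut -d' ' -f1)                 # checksum before the write
    spdk_dd --if="$testfile" --ob=ftl0 --json=ftl.json           # write through the FTL bdev
    spdk_dd --ib=ftl0 --of="$dumpfile" --json=ftl.json           # read the same data back out
    after=$(md5sum "$dumpfile" | cut -d' ' -f1)
    [ "$before" = "$after" ] && echo "restore OK" || { echo "md5 mismatch"; exit 1; }
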
00:35:25.438 07:32:49 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 07:32:49.553786] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
[2024-11-20 07:32:49.553974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77347 ]
[2024-11-20 07:32:49.765810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:32:49.945545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 07:32:50.383977] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 07:32:50.384080] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:35:26.475 [2024-11-20 07:32:50.560947] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.009 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.561318] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.048 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.561395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:35:26.475 [2024-11-20 07:32:50.562709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:35:26.475 [2024-11-20 07:32:50.562742] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 1.353 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.564484] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:35:26.475 [2024-11-20 07:32:50.587682] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 23.196 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.587869] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.036 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.595033] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 7.011 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.595236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.097 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.595330] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.009 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.595397] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:35:26.475 [2024-11-20 07:32:50.601181] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 5.791 ms, status: 0)
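The two bdev_open_ext notices above are benign: ftl0 asked for nvc0n1 before the JSON config had finished creating it, and the startup proceeds once the bdev registers, as the "Using nvc0n1p0 as write buffer cache" line shows. When scripting a long-running SPDK target rather than spdk_dd, the same race is commonly avoided by polling the RPC server; an illustrative helper, assuming rpc.py on PATH and the default RPC socket (not part of this test):

    wait_for_bdev() {
        local name=$1 retries=${2:-20}
        while ((retries--)); do
            # bdev_get_bdevs -b exits non-zero while the bdev does not exist yet
            rpc.py bdev_get_bdevs -b "$name" >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        echo "bdev $name never appeared" >&2; return 1
    }
    wait_for_bdev nvc0n1
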
00:35:26.475 [2024-11-20 07:32:50.601349] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.015 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.601461] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:35:26.475 [2024-11-20 07:32:50.601494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:35:26.475 [2024-11-20 07:32:50.601537] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:35:26.475 [2024-11-20 07:32:50.601566] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:35:26.475 [2024-11-20 07:32:50.601675] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:35:26.475 [2024-11-20 07:32:50.601692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:35:26.475 [2024-11-20 07:32:50.601708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:35:26.475 [2024-11-20 07:32:50.601724] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:35:26.475 [2024-11-20 07:32:50.601743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:35:26.475 [2024-11-20 07:32:50.601766] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:35:26.475 [2024-11-20 07:32:50.601782] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:35:26.475 [2024-11-20 07:32:50.601798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:35:26.475 [2024-11-20 07:32:50.601848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:35:26.475 [2024-11-20 07:32:50.601885] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.426 ms, status: 0)
00:35:26.475 [2024-11-20 07:32:50.602036] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
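Those layout figures are self-consistent: 20971520 L2P entries at 4 bytes each is exactly the 80 MiB l2p region dumped below, and the same entry count at FTL's 4 KiB logical block size maps 81920 MiB of user LBAs out of the 103424 MiB base device, the remainder going to metadata and over-provisioning. Quick shell arithmetic to check both, using only numbers from this log:

    entries=20971520 addr=4 block=4096
    echo "L2P table:       $(( entries * addr  / 1024 / 1024 )) MiB"   # 80 MiB
    echo "mapped capacity: $(( entries * block / 1024 / 1024 )) MiB"   # 81920 MiB
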
00:35:26.475 [2024-11-20 07:32:50.602219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:35:26.475 [2024-11-20 07:32:50.602254] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:35:26.475 [2024-11-20 07:32:50.602303] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:35:26.475 [2024-11-20 07:32:50.602346] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:35:26.475 [2024-11-20 07:32:50.602380] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:35:26.475 [2024-11-20 07:32:50.602414] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:35:26.475 [2024-11-20 07:32:50.602463] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:35:26.475 [2024-11-20 07:32:50.602498] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:35:26.475 [2024-11-20 07:32:50.602531] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:35:26.475 [2024-11-20 07:32:50.602565] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:35:26.475 [2024-11-20 07:32:50.602598] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:35:26.475 [2024-11-20 07:32:50.602631] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:35:26.476 [2024-11-20 07:32:50.602663] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:35:26.476 [2024-11-20 07:32:50.602696] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:35:26.476 [2024-11-20 07:32:50.602742] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:35:26.476 [2024-11-20 07:32:50.602795] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:35:26.476 [2024-11-20 07:32:50.602832] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:35:26.476 [2024-11-20 07:32:50.602884] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:35:26.476 [2024-11-20 07:32:50.602929] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
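The region dump is easier to scan as a table sorted by offset. A small sketch over the condensed lines above ("build.log" is a placeholder name for this console output):

    grep -oE 'Region [a-z0-9_]+: offset [0-9.]+ MiB, blocks [0-9.]+ MiB' build.log |
      awk '{ sub(/:$/, "", $2); printf "%-16s offset %10s MiB size %10s MiB\n", $2, $4, $7 }' |
      sort -k3 -n
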
00:35:26.476 [2024-11-20 07:32:50.602975] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:35:26.476 [2024-11-20 07:32:50.602994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:35:26.476 [2024-11-20 07:32:50.603028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:35:26.476 [2024-11-20 07:32:50.603044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:35:26.476 [2024-11-20 07:32:50.603061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:35:26.476 [2024-11-20 07:32:50.603077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:35:26.476 [2024-11-20 07:32:50.603093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:35:26.476 [2024-11-20 07:32:50.603109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:35:26.476 [2024-11-20 07:32:50.603125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:35:26.476 [2024-11-20 07:32:50.603142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:35:26.476 [2024-11-20 07:32:50.603159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:35:26.476 [2024-11-20 07:32:50.603256] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:35:26.476 [2024-11-20 07:32:50.603284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:35:26.476 [2024-11-20 07:32:50.603319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:35:26.476 [2024-11-20 07:32:50.603336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:35:26.476 [2024-11-20 07:32:50.603352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:35:26.476 [2024-11-20 07:32:50.603369] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 1.227 ms, status: 0)
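The superblock entries count raw 4 KiB FTL blocks, so they can be cross-checked against the MiB dump above: type 0x2 spans 0x5000 blocks, which is the 80 MiB l2p region, and type 0x9 spans 0x1900000 blocks, the 102400 MiB data_btm region. Bash arithmetic handles the hex literals directly:

    to_mib() { echo $(( $1 * 4096 / 1024 / 1024 )); }   # blocks -> MiB at 4 KiB per block
    to_mib 0x5000      # 80     (matches the l2p region)
    to_mib 0x1900000   # 102400 (matches data_btm)
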
00:35:26.476 [2024-11-20 07:32:50.656232] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 52.741 ms, status: 0)
00:35:26.476 [2024-11-20 07:32:50.656456] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.066 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.728919] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 72.302 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.729087] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.004 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.729659] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.442 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.729864] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.137 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.755421] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 25.474 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.780359] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:35:26.734 [2024-11-20 07:32:50.780424] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:35:26.734 [2024-11-20 07:32:50.780444] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 24.558 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.817934] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 37.394 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.841016] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 22.909 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.864268] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 22.955 ms, status: 0)
00:35:26.734 [2024-11-20 07:32:50.865479] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.827 ms, status: 0)
00:35:26.993 [2024-11-20 07:32:50.971857] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 106.279 ms, status: 0)
00:35:26.993 [2024-11-20 07:32:50.986622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:35:26.993 [2024-11-20 07:32:50.990160] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 18.122 ms, status: 0)
00:35:26.993 [2024-11-20 07:32:50.990545] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.007 ms, status: 0)
00:35:26.993 [2024-11-20 07:32:50.990680] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.034 ms, status: 0)
00:35:26.994 [2024-11-20 07:32:50.990749] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.005 ms, status: 0)
00:35:26.994 [2024-11-20 07:32:50.990836] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:35:26.994 [2024-11-20 07:32:50.990852] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.016 ms, status: 0)
00:35:26.994 [2024-11-20 07:32:51.035511] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 44.565 ms, status: 0)
00:35:26.994 [2024-11-20 07:32:51.035777] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.056 ms, status: 0)
00:35:26.994 [2024-11-20 07:32:51.037446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 475.781 ms, result 0
00:35:27.933  [2024-11-20T07:32:53.073Z] Copying: 29/1024 [MB] (29 MBps)
[2024-11-20T07:32:54.448Z] Copying: 57/1024 [MB] (28 MBps)
... (one progress record per interval, 28-33 MBps, through Copying: 1004/1024 [MB]) ...
[2024-11-20T07:33:24.795Z] Copying: 1024/1024 [MB] (average 30 MBps)
00:36:00.592 [2024-11-20 07:33:24.685580] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.003 ms, status: 0)
00:36:00.592 [2024-11-20 07:33:24.685710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:36:00.592 [2024-11-20 07:33:24.690294] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 4.564 ms, status: 0)
00:36:00.592 [2024-11-20 07:33:24.692262] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 1.845 ms, status: 0)
00:36:00.592 [2024-11-20 07:33:24.706758] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 14.394 ms, status: 0)
00:36:00.593 [2024-11-20 07:33:24.712611] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 5.731 ms, status: 0)
00:36:00.593 [2024-11-20 07:33:24.755167] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 42.408 ms, status: 0)
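The data phase above pushed 1024 MB through ftl0 at a reported average of 30 MBps. That figure can be re-derived from the per-interval progress records in the full console log (the elided ones included; "build.log" is again a placeholder name):

    awk 'match($0, /\(([0-9]+) MBps\)/) {
             sum += substr($0, RSTART + 1, RLENGTH - 7); n++   # strip "(" and " MBps)"
         }
         END { if (n) printf "mean per-interval rate: %.1f MBps\n", sum / n }' build.log
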
00:36:00.593 [2024-11-20 07:33:24.778720] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 23.393 ms, status: 0)
00:36:00.593 [2024-11-20 07:33:24.779001] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 0.123 ms, status: 0)
00:36:00.852 [2024-11-20 07:33:24.817709] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 38.640 ms, status: 0)
00:36:00.852 [2024-11-20 07:33:24.856422] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 38.555 ms, status: 0)
00:36:00.852 [2024-11-20 07:33:24.897237] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 40.662 ms, status: 0)
00:36:00.852 [2024-11-20 07:33:24.936734] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 39.308 ms, status: 0)
00:36:00.852 [2024-11-20 07:33:24.936885] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:36:00.852 [2024-11-20 07:33:24.936906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:36:00.852 ... (Bands 2-99: identical, 0 / 261120 wr_cnt: 0 state: free) ...
00:36:00.853 [2024-11-20 07:33:24.938162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:36:00.853 [2024-11-20 07:33:24.938183] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:36:00.853 [2024-11-20 07:33:24.938201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a6538a1-e573-4c8a-9e5f-4aff796a6df9
00:36:00.853 [2024-11-20 07:33:24.938214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:36:00.853 [2024-11-20 07:33:24.938239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:36:00.853 [2024-11-20 07:33:24.938250] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:36:00.853 [2024-11-20 07:33:24.938262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:36:00.853 [2024-11-20 07:33:24.938272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:36:00.854 [2024-11-20 07:33:24.938284] ftl_debug.c:
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:00.854 [2024-11-20 07:33:24.938294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:00.854 [2024-11-20 07:33:24.938316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:00.854 [2024-11-20 07:33:24.938326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:00.854 [2024-11-20 07:33:24.938337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:00.854 [2024-11-20 07:33:24.938349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:00.854 [2024-11-20 07:33:24.938361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:36:00.854 [2024-11-20 07:33:24.938372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:24.960154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:00.854 [2024-11-20 07:33:24.960196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:00.854 [2024-11-20 07:33:24.960211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:36:00.854 [2024-11-20 07:33:24.960239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:24.960897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:00.854 [2024-11-20 07:33:24.960920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:00.854 [2024-11-20 07:33:24.960933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:36:00.854 [2024-11-20 07:33:24.960945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:25.020038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:00.854 [2024-11-20 07:33:25.020089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:00.854 [2024-11-20 07:33:25.020106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:00.854 [2024-11-20 07:33:25.020135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:25.020216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:00.854 [2024-11-20 07:33:25.020229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:00.854 [2024-11-20 07:33:25.020241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:00.854 [2024-11-20 07:33:25.020252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:25.020373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:00.854 [2024-11-20 07:33:25.020392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:00.854 [2024-11-20 07:33:25.020404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:00.854 [2024-11-20 07:33:25.020415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:00.854 [2024-11-20 07:33:25.020435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:00.854 [2024-11-20 07:33:25.020446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:00.854 [2024-11-20 07:33:25.020457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:00.854 [2024-11-20 07:33:25.020468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
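One figure in the stats above is worth decoding: WAF (write amplification factor) is the ratio of total media writes to user writes, so with user writes: 0 it is undefined and prints as inf; the 960 total writes are evidently all FTL-internal metadata traffic from the shutdown path. The same computation as a tiny shell sketch, with the two counters hard-coded from the dump above:

  # WAF = total media writes / user writes; a zero user-write count is why
  # the dump above reports "inf" rather than a number.
  total=960; user=0
  if [ "$user" -eq 0 ]; then
      echo "WAF: inf"
  else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi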
[2024-11-20 07:33:25.153735] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.263967] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize metadata, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264182] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264273] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize bands, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264446] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264527] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Initialize superblock, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264603] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Open cache bdev, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264687] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback, name: Open base bdev, duration: 0.000 ms, status: 0
[2024-11-20 07:33:25.264850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 579.229 ms, result 0
07:33:26 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
[2024-11-20 07:33:26.522024] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
[2024-11-20 07:33:26.522267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77722 ]
[2024-11-20 07:33:26.718979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 07:33:26.843596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 07:33:27.212483] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 07:33:27.212559] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 07:33:27.374456] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Check configuration, duration: 0.006 ms, status: 0
[2024-11-20 07:33:27.374640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Open base bdev, duration: 0.041 ms, status: 0
[2024-11-20 07:33:27.374725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 07:33:27.375714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 07:33:27.375756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Open cache bdev, duration: 1.037 ms, status: 0
[2024-11-20 07:33:27.377645] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 07:33:27.396496] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Load super block, duration: 18.852 ms, status: 0
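The spdk_dd record above is the copy-out step of the restore test proper: ftl/restore.sh (line 74) shells out to spdk_dd, which opens the FTL bdev ftl0 described by the saved ftl.json as its input and streams it into a regular file. A standalone sketch of the same invocation (paths exactly as logged; $SPDK is shorthand introduced here; 262144 blocks at the FTL bdev's 4 KiB block size is the 1024 MiB the progress ticks below add up to):

  # Dump the ftl0 bdev to a file. --json supplies the bdev configuration,
  # --ib/--of name the input bdev and output file, --count is in bdev
  # blocks (262144 * 4 KiB = 1024 MiB).
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK/test/ftl/testfile" \
      --json="$SPDK/test/ftl/config/ftl.json" \
      --count=262144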
[2024-11-20 07:33:27.396725] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Validate super block, duration: 0.037 ms, status: 0
[2024-11-20 07:33:27.405593] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize memory pools, duration: 8.700 ms, status: 0
[2024-11-20 07:33:27.405800] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize bands, duration: 0.076 ms, status: 0
[2024-11-20 07:33:27.405931] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Register IO device, duration: 0.012 ms, status: 0
[2024-11-20 07:33:27.406017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 07:33:27.411209] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize core IO channel, duration: 5.200 ms, status: 0
[2024-11-20 07:33:27.411337] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Decorate bands, duration: 0.012 ms, status: 0
[2024-11-20 07:33:27.411463] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 07:33:27.411495] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc / base / layout blob load: 0x150 / 0x48 / 0x190 bytes
[2024-11-20 07:33:27.411689] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc / base / layout blob store: 0x150 / 0x48 / 0x190 bytes
[2024-11-20 07:33:27.411749] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 07:33:27.411767] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 07:33:27.411784] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
[2024-11-20 07:33:27.411798] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 07:33:27.411830] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 07:33:27.411846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 07:33:27.411866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize layout, duration: 0.407 ms, status: 0
[2024-11-20 07:33:27.412010] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Verify layout, duration: 0.068 ms, status: 0
[2024-11-20 07:33:27.412181] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb: offset 0.00 MiB, blocks 0.12 MiB
  Region l2p: offset 0.12 MiB, blocks 80.00 MiB
  Region band_md: offset 80.12 MiB, blocks 0.50 MiB
  Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
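Two numbers in the setup dump above determine most of the layout: 20971520 L2P entries at an L2P address size of 4 bytes is exactly the 80.00 MiB l2p region in the NV cache layout, and the same entry count times the 4 KiB bdev block size gives 81920 MiB of user-addressable space out of the 102400.00 MiB data area listed further down (the remaining ~20% is presumably the FTL's overprovisioning reserve). A quick shell check of both products:

  # L2P table size and mapped capacity, straight from the logged constants.
  entries=20971520; addr_size=4; block=4096
  echo "l2p region: $(( entries * addr_size / 1024 / 1024 )) MiB"  # 80, matches 'blocks: 80.00 MiB'
  echo "mapped:     $(( entries * block / 1024 / 1024 )) MiB"      # 81920 of the 102400 MiB data area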
  Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
  Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
  Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
  Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
  Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
  Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
  Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
  Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
  Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
  Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
[2024-11-20 07:33:27.412805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
  Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
  Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-20 07:33:27.412968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-11-20 07:33:27.413218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 07:33:27.413318] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Layout upgrade, duration: 1.201 ms, status: 0
[2024-11-20 07:33:27.456503] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize metadata, duration: 43.065 ms, status: 0
[2024-11-20 07:33:27.456968] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize band addresses, duration: 0.183 ms, status: 0
[2024-11-20 07:33:27.540636] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize NV cache, duration: 83.366 ms, status: 0
[2024-11-20 07:33:27.540841] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize valid map, duration: 0.004 ms, status: 0
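The hex tables above are the same layout again, this time in raw 4 KiB blocks, so the two dumps can be cross-checked. Two spot checks (matching type 0x2 to the l2p region and type 0x9 to data_btm is an inference from the agreeing offsets and sizes, not something the log states):

  # blk_offs/blk_sz are in 4 KiB blocks; convert two entries back to MiB.
  printf 'type 0x2: %d MiB\n' $(( 0x5000    * 4096 / 1048576 ))   # 80, the l2p region
  printf 'type 0x9: %d MiB\n' $(( 0x1900000 * 4096 / 1048576 ))   # 102400, the data_btm region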
[2024-11-20 07:33:27.541501] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize trim map, duration: 0.486 ms, status: 0
[2024-11-20 07:33:27.541740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize bands metadata, duration: 0.141 ms, status: 0
[2024-11-20 07:33:27.572426] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize reloc, duration: 30.566 ms, status: 0
[2024-11-20 07:33:27.604231] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-20 07:33:27.604313] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 07:33:27.604338] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore NV cache metadata, duration: 31.606 ms, status: 0
[2024-11-20 07:33:27.652722] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore valid map metadata, duration: 48.268 ms, status: 0
[2024-11-20 07:33:27.683446] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore band info metadata, duration: 30.521 ms, status: 0
[2024-11-20 07:33:27.712543] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore trim metadata, duration: 28.934 ms, status: 0
[2024-11-20 07:33:27.713801] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize P2L checkpointing, duration: 1.006 ms, status: 0
[2024-11-20 07:33:27.810205] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore P2L checkpoints, duration: 96.262 ms, status: 0
[2024-11-20 07:33:27.821795] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-11-20 07:33:27.825007] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Initialize L2P, duration: 14.600 ms, status: 0
[2024-11-20 07:33:27.825193] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Restore L2P, duration: 0.007 ms, status: 0
[2024-11-20 07:33:27.825314] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Finalize band initialization, duration: 0.035 ms, status: 0
[2024-11-20 07:33:27.825369] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Start core poller, duration: 0.004 ms, status: 0
[2024-11-20 07:33:27.825438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 07:33:27.825454] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Self test on startup, duration: 0.016 ms, status: 0
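The step durations above account for most of the 489.500 ms 'FTL startup' total reported just below: Restore P2L checkpoints (96.262 ms), Initialize NV cache (83.366 ms) and Restore valid map metadata (48.268 ms) dominate, with the other metadata restores filling out the rest. A small sketch for ranking steps out of a captured log, assuming the condensed "name: ..., duration: ... ms" record form used above (ftl.log is a hypothetical capture of this console output):

  # Pull name/duration pairs out of trace_step records and print the
  # slowest steps first.
  awk -F 'name: |, duration: | ms' '/name: .*, duration: / {
        printf "%10.3f ms  %s\n", $3, $2
      }' ftl.log | sort -rn | head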
[2024-11-20 07:33:27.862931] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Set FTL dirty state, duration: 37.425 ms, status: 0
[2024-11-20 07:33:27.863102] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Finalize initialization, duration: 0.039 ms, status: 0
[2024-11-20 07:33:27.864430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 489.500 ms, result 0
[2024-11-20T07:33:30.356Z] Copying: 32/1024 [MB] (32 MBps)
[2024-11-20T07:33:31.294Z] Copying: 64/1024 [MB] (32 MBps)
  (per-second progress ticks continue at 29-32 MBps)
[2024-11-20T07:34:00.491Z] Copying: 1014/1024 [MB] (31 MBps)
[2024-11-20T07:34:00.491Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-11-20 07:34:00.479779] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Deinit core IO channel, duration: 0.004 ms, status: 0
[2024-11-20 07:34:00.479949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
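A quick consistency check on the transfer just logged: the ticks run from roughly 07:33:28 to 07:34:00, i.e. about 33 s for 1024 MB, which is where the reported 31 MBps average comes from:

  # 1024 MB over the ~33 s spanned by the progress timestamps.
  awk 'BEGIN { printf "%.1f MBps\n", 1024 / 33 }'   # -> 31.0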
[2024-11-20 07:34:00.487648] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Unregister IO device, duration: 7.673 ms, status: 0
[2024-11-20 07:34:00.488075] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Stop core poller, duration: 0.293 ms, status: 0
[2024-11-20 07:34:00.491683] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist L2P, duration: 3.520 ms, status: 0
[2024-11-20 07:34:00.496866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Finish L2P trims, duration: 5.110 ms, status: 0
[2024-11-20 07:34:00.535015] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist NV cache metadata, duration: 38.023 ms, status: 0
[2024-11-20 07:34:00.556852] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist valid map metadata, duration: 21.730 ms, status: 0
[2024-11-20 07:34:00.557048] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist P2L metadata, duration: 0.087 ms, status: 0
[2024-11-20 07:34:00.594738] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist band info metadata, duration: 37.630 ms, status: 0
[2024-11-20 07:34:00.631548] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist trim metadata, duration: 36.700 ms, status: 0
[2024-11-20 07:34:00.668021] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Persist superblock, duration: 36.363 ms, status: 0
[2024-11-20 07:34:00.704490] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action, name: Set FTL clean state, duration: 36.317 ms, status: 0
[2024-11-20 07:34:00.704588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 07:34:00.704605 .. 07:34:00.705567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 89: 0 / 261120 wr_cnt: 0 state: free (89 identical entries)
[2024-11-20 07:34:00.705578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90:
0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:36.549 [2024-11-20 07:34:00.705705] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:36.549 [2024-11-20 07:34:00.705719] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a6538a1-e573-4c8a-9e5f-4aff796a6df9 00:36:36.549 [2024-11-20 07:34:00.705730] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:36.549 [2024-11-20 07:34:00.705740] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:36.549 [2024-11-20 07:34:00.705749] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:36.549 [2024-11-20 07:34:00.705760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:36.549 [2024-11-20 07:34:00.705770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:36.549 [2024-11-20 07:34:00.705780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:36.549 [2024-11-20 07:34:00.705801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:36.549 [2024-11-20 07:34:00.705810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:36.549 [2024-11-20 07:34:00.705834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:36.549 [2024-11-20 07:34:00.705845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.549 [2024-11-20 07:34:00.705855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:36.549 [2024-11-20 07:34:00.705866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:36:36.549 [2024-11-20 07:34:00.705876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:36.549 [2024-11-20 07:34:00.726589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:36.549 [2024-11-20 07:34:00.726627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:36.549 [2024-11-20 07:34:00.726641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.676 ms 00:36:36.549 [2024-11-20 
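An aside on the "WAF: inf" line above: FTL reports write amplification as the ratio of total media writes to user writes, so with 960 internal (metadata) writes and zero user writes the ratio is undefined and prints as infinity. A hedged reading of the dump (the exact accounting inside ftl_debug.c is an assumption here):

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\longrightarrow\; \infty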
00:36:36.549 [2024-11-20 07:34:00.705845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.549 [2024-11-20 07:34:00.705855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:36:36.549 [2024-11-20 07:34:00.705866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms
00:36:36.549 [2024-11-20 07:34:00.705876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.549 [2024-11-20 07:34:00.726589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.549 [2024-11-20 07:34:00.726627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:36:36.549 [2024-11-20 07:34:00.726641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.676 ms
00:36:36.549 [2024-11-20 07:34:00.726653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.549 [2024-11-20 07:34:00.727270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:36.549 [2024-11-20 07:34:00.727292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:36:36.549 [2024-11-20 07:34:00.727303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms
00:36:36.549 [2024-11-20 07:34:00.727320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.808 [2024-11-20 07:34:00.783338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:36.808 [2024-11-20 07:34:00.783398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:36:36.808 [2024-11-20 07:34:00.783413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:36.808 [2024-11-20 07:34:00.783425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.808 [2024-11-20 07:34:00.783495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:36.808 [2024-11-20 07:34:00.783507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:36:36.808 [2024-11-20 07:34:00.783519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:36.808 [2024-11-20 07:34:00.783536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.808 [2024-11-20 07:34:00.783618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:36.808 [2024-11-20 07:34:00.783634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:36:36.808 [2024-11-20 07:34:00.783645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:36.808 [2024-11-20 07:34:00.783656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.808 [2024-11-20 07:34:00.783676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:36.808 [2024-11-20 07:34:00.783687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:36:36.808 [2024-11-20 07:34:00.783698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:36.808 [2024-11-20 07:34:00.783709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:36.808 [2024-11-20 07:34:00.912433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:36.808 [2024-11-20 07:34:00.912501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:36:36.808 [2024-11-20 07:34:00.912517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:36.808 [2024-11-20 07:34:00.912527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.018574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.018640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:36:37.067 [2024-11-20 07:34:01.018655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.018666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.018770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.018782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:36:37.067 [2024-11-20 07:34:01.018794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.018804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.018864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.018877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:36:37.067 [2024-11-20 07:34:01.018887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.018897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.019021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.019035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:36:37.067 [2024-11-20 07:34:01.019045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.019055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.019090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.019103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:36:37.067 [2024-11-20 07:34:01.019114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.019123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.019160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.019176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:36:37.067 [2024-11-20 07:34:01.019188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.019198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.019240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:37.067 [2024-11-20 07:34:01.019252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:36:37.067 [2024-11-20 07:34:01.019262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:37.067 [2024-11-20 07:34:01.019272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:37.067 [2024-11-20 07:34:01.019390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.584 ms, result 0
00:36:38.003 07:34:02 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:36:39.905 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:36:39.905 07:34:03 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
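The two restore.sh steps above are the core of the FTL restore check: verify the test file against its stored checksum, then write it back into the ftl0 bdev at a block offset through spdk_dd. A minimal sketch of that flow, assuming the paths and flags shown in this run (the surrounding logic of ftl/restore.sh is not reproduced here):

    #!/usr/bin/env bash
    # Sketch of the verify-then-write step seen above (paths taken from this log).
    set -e

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # 1) Verify the data read back from the FTL device against the stored md5.
    md5sum -c "${testfile}.md5"

    # 2) Write the file into the ftl0 bdev at block offset 131072, using the
    #    bdev configuration captured in ftl.json.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$testfile" \
        --ob=ftl0 \
        --json="$ftl_json" \
        --seek=131072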
00:36:39.905 [2024-11-20 07:34:04.069978] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:36:39.905 [2024-11-20 07:34:04.070151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78103 ]
00:36:40.164 [2024-11-20 07:34:04.244577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:40.423 [2024-11-20 07:34:04.366163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:40.682 [2024-11-20 07:34:04.740003] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:36:40.682 [2024-11-20 07:34:04.740078] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:36:40.942 [2024-11-20 07:34:04.901078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.942 [2024-11-20 07:34:04.901155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:36:40.942 [2024-11-20 07:34:04.901177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:36:40.942 [2024-11-20 07:34:04.901187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.942 [2024-11-20 07:34:04.901237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.942 [2024-11-20 07:34:04.901250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:36:40.942 [2024-11-20 07:34:04.901264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:36:40.942 [2024-11-20 07:34:04.901274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.942 [2024-11-20 07:34:04.901296] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:36:40.943 [2024-11-20 07:34:04.902264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:36:40.943 [2024-11-20 07:34:04.902296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.902307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:36:40.943 [2024-11-20 07:34:04.902319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms
00:36:40.943 [2024-11-20 07:34:04.902328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.903730] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:36:40.943 [2024-11-20 07:34:04.923944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.923985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:36:40.943 [2024-11-20 07:34:04.924016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.214 ms
00:36:40.943 [2024-11-20 07:34:04.924026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.924094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.924108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:36:40.943 [2024-11-20 07:34:04.924119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:36:40.943 [2024-11-20 07:34:04.924129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.930782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.930821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:36:40.943 [2024-11-20 07:34:04.930834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.578 ms
00:36:40.943 [2024-11-20 07:34:04.930846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.930929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.930942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:36:40.943 [2024-11-20 07:34:04.930953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:36:40.943 [2024-11-20 07:34:04.930964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.931006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.931018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:36:40.943 [2024-11-20 07:34:04.931029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:36:40.943 [2024-11-20 07:34:04.931039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.931065] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:36:40.943 [2024-11-20 07:34:04.935852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.935886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:36:40.943 [2024-11-20 07:34:04.935900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.793 ms
00:36:40.943 [2024-11-20 07:34:04.935930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.935972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.935984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:36:40.943 [2024-11-20 07:34:04.935995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:36:40.943 [2024-11-20 07:34:04.936005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.936062] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:36:40.943 [2024-11-20 07:34:04.936086] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:36:40.943 [2024-11-20 07:34:04.936122] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:36:40.943 [2024-11-20 07:34:04.936160] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:36:40.943 [2024-11-20 07:34:04.936264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:36:40.943 [2024-11-20 07:34:04.936277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:36:40.943 [2024-11-20 07:34:04.936290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:36:40.943 [2024-11-20 07:34:04.936304] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:36:40.943 [2024-11-20 07:34:04.936316] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:36:40.943 [2024-11-20 07:34:04.936328] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:36:40.943 [2024-11-20 07:34:04.936338] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:36:40.943 [2024-11-20 07:34:04.936348] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:36:40.943 [2024-11-20 07:34:04.936358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:36:40.943 [2024-11-20 07:34:04.936372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.936382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:36:40.943 [2024-11-20 07:34:04.936393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms
00:36:40.943 [2024-11-20 07:34:04.936402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.936478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.943 [2024-11-20 07:34:04.936491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:36:40.943 [2024-11-20 07:34:04.936501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:36:40.943 [2024-11-20 07:34:04.936511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.943 [2024-11-20 07:34:04.936629] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region            offset (MiB)   blocks (MiB)
    sb                      0.00           0.12
    l2p                     0.12          80.00
    band_md                80.12           0.50
    band_md_mirror         80.62           0.50
    nvc_md                113.88           0.12
    nvc_md_mirror         114.00           0.12
    p2l0                   81.12           8.00
    p2l1                   89.12           8.00
    p2l2                   97.12           8.00
    p2l3                  105.12           8.00
    trim_md               113.12           0.25
    trim_md_mirror        113.38           0.25
    trim_log              113.62           0.12
    trim_log_mirror       113.75           0.12
00:36:40.943 [2024-11-20 07:34:04.937106] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region            offset (MiB)   blocks (MiB)
    sb_mirror               0.00           0.12
    vmap               102400.25           3.38
    data_btm                0.25      102400.00
00:36:40.944 [2024-11-20 07:34:04.937212] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type   ver   blk_offs     blk_sz
    0x0           5     0x0          0x20
    0x2           0     0x20         0x5000
    0x3           2     0x5020       0x80
    0x4           2     0x50a0       0x80
    0xa           2     0x5120       0x800
    0xb           2     0x5920       0x800
    0xc           2     0x6120       0x800
    0xd           2     0x6920       0x800
    0xe           0     0x7120       0x40
    0xf           0     0x7160       0x40
    0x10          1     0x71a0       0x20
    0x11          1     0x71c0       0x20
    0x6           2     0x71e0       0x20
    0x7           2     0x7200       0x20
    0xfffffffe    0     0x7220       0x13c0e0
00:36:40.944 [2024-11-20 07:34:04.937392] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type   ver   blk_offs     blk_sz
    0x1           5     0x0          0x20
    0xfffffffe    0     0x20         0x20
    0x9           0     0x40         0x1900000
    0x5           0     0x1900040    0x360
    0xfffffffe    0     0x19003a0    0x3fc60
00:36:40.944 [2024-11-20 07:34:04.937468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:04.937479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:36:40.944 [2024-11-20 07:34:04.937491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms
00:36:40.944 [2024-11-20 07:34:04.937501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
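The blk_offs/blk_sz fields in the superblock dump above are hexadecimal block counts. Assuming the 4 KiB FTL block size this run implies (0x1900000 blocks works out to exactly the 102400.00 MiB reported for data_btm), they convert to MiB directly; a small sketch of that conversion:

    # Convert an FTL superblock blk_sz (hex block count) to MiB, assuming 4 KiB blocks.
    blk_to_mib() {
        local blocks=$(( $1 ))   # bash arithmetic accepts 0x... literals
        echo "$(( blocks * 4096 / 1024 / 1024 )) MiB"
    }
    blk_to_mib 0x1900000   # -> 102400 MiB, matching the data_btm region above
    blk_to_mib 0x5000      # -> 80 MiB, matching the l2p region above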
00:36:40.944 [2024-11-20 07:34:04.978615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:04.978662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:36:40.944 [2024-11-20 07:34:04.978677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.062 ms
00:36:40.944 [2024-11-20 07:34:04.978689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:04.978784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:04.978795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:36:40.944 [2024-11-20 07:34:04.978806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:36:40.944 [2024-11-20 07:34:04.978828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.042990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.043036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:36:40.944 [2024-11-20 07:34:05.043068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.088 ms
00:36:40.944 [2024-11-20 07:34:05.043080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.043135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.043148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:36:40.944 [2024-11-20 07:34:05.043160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:36:40.944 [2024-11-20 07:34:05.043176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.043694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.043718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:36:40.944 [2024-11-20 07:34:05.043730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms
00:36:40.944 [2024-11-20 07:34:05.043742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.043897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.043912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:36:40.944 [2024-11-20 07:34:05.043922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms
00:36:40.944 [2024-11-20 07:34:05.043939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.062919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.062965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:36:40.944 [2024-11-20 07:34:05.062985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.956 ms
00:36:40.944 [2024-11-20 07:34:05.062996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.082744] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:36:40.944 [2024-11-20 07:34:05.082785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:36:40.944 [2024-11-20 07:34:05.082802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.082821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:36:40.944 [2024-11-20 07:34:05.082834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.671 ms
00:36:40.944 [2024-11-20 07:34:05.082844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.114131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.114187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:36:40.944 [2024-11-20 07:34:05.114202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.241 ms
00:36:40.944 [2024-11-20 07:34:05.114213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:40.944 [2024-11-20 07:34:05.134255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:40.944 [2024-11-20 07:34:05.134320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:36:40.944 [2024-11-20 07:34:05.134337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.991 ms
00:36:40.944 [2024-11-20 07:34:05.134348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.154521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.154567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:36:41.205 [2024-11-20 07:34:05.154583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.123 ms
00:36:41.205 [2024-11-20 07:34:05.154610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.155626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.155660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:36:41.205 [2024-11-20 07:34:05.155674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms
00:36:41.205 [2024-11-20 07:34:05.155690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.248898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.248960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:36:41.205 [2024-11-20 07:34:05.248986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.176 ms
00:36:41.205 [2024-11-20 07:34:05.248998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.261592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:36:41.205 [2024-11-20 07:34:05.265003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.265038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:36:41.205 [2024-11-20 07:34:05.265053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.937 ms
00:36:41.205 [2024-11-20 07:34:05.265064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.265189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.265203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:36:41.205 [2024-11-20 07:34:05.265216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:36:41.205 [2024-11-20 07:34:05.265231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.265330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.265343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:36:41.205 [2024-11-20 07:34:05.265355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:36:41.205 [2024-11-20 07:34:05.265366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.265392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.265405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:36:41.205 [2024-11-20 07:34:05.265416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:36:41.205 [2024-11-20 07:34:05.265427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.265461] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:36:41.205 [2024-11-20 07:34:05.265477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.265489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:36:41.205 [2024-11-20 07:34:05.265500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:36:41.205 [2024-11-20 07:34:05.265510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.304655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.304702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:36:41.205 [2024-11-20 07:34:05.304718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.120 ms
00:36:41.205 [2024-11-20 07:34:05.304736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.304827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:41.205 [2024-11-20 07:34:05.304841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:36:41.205 [2024-11-20 07:34:05.304852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:36:41.205 [2024-11-20 07:34:05.304863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:41.205 [2024-11-20 07:34:05.306066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.503 ms, result 0
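Each management step above logs a paired name:/duration: record, so the slow steps of a startup like this one (Restore P2L checkpoints at 93.176 ms, Initialize NV cache at 64.088 ms, ...) can be pulled straight out of the raw console log. A throwaway sketch, assuming the log has been saved to a file named build.log (the format is taken from the trace_step lines above):

    # List FTL management steps by duration, slowest first.
    # Pairs each "name:" record with the "duration:" record that follows it.
    grep -E 'trace_step.*(name:|duration:)' build.log \
      | awk '/name:/     { sub(/.*name:[[:space:]]*/, ""); name = $0 }
             /duration:/ { printf "%10.3f ms  %s\n", $(NF-1), name }' \
      | sort -rn | head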
00:36:42.149 [2024-11-20T07:34:07.733Z] Copying: 31/1024 [MB] (31 MBps)
[2024-11-20T07:34:08.669Z] Copying: 62/1024 [MB] (31 MBps)
[2024-11-20T07:34:09.604Z] Copying: 94/1024 [MB] (31 MBps)
[2024-11-20T07:34:10.541Z] Copying: 121/1024 [MB] (27 MBps)
[2024-11-20T07:34:11.478Z] Copying: 151/1024 [MB] (29 MBps)
[2024-11-20T07:34:12.414Z] Copying: 182/1024 [MB] (30 MBps)
[2024-11-20T07:34:13.394Z] Copying: 212/1024 [MB] (30 MBps)
[2024-11-20T07:34:14.329Z] Copying: 242/1024 [MB] (30 MBps)
[2024-11-20T07:34:15.706Z] Copying: 273/1024 [MB] (30 MBps)
[2024-11-20T07:34:16.640Z] Copying: 303/1024 [MB] (30 MBps)
[2024-11-20T07:34:17.576Z] Copying: 334/1024 [MB] (30 MBps)
[2024-11-20T07:34:18.513Z] Copying: 364/1024 [MB] (30 MBps)
[2024-11-20T07:34:19.449Z] Copying: 394/1024 [MB] (30 MBps)
[2024-11-20T07:34:20.383Z] Copying: 425/1024 [MB] (30 MBps)
[2024-11-20T07:34:21.758Z] Copying: 455/1024 [MB] (30 MBps)
[2024-11-20T07:34:22.324Z] Copying: 485/1024 [MB] (29 MBps)
[2024-11-20T07:34:23.702Z] Copying: 516/1024 [MB] (31 MBps)
[2024-11-20T07:34:24.638Z] Copying: 545/1024 [MB] (29 MBps)
[2024-11-20T07:34:25.576Z] Copying: 575/1024 [MB] (29 MBps)
[2024-11-20T07:34:26.510Z] Copying: 606/1024 [MB] (30 MBps)
[2024-11-20T07:34:27.442Z] Copying: 636/1024 [MB] (30 MBps)
[2024-11-20T07:34:28.376Z] Copying: 669/1024 [MB] (33 MBps)
[2024-11-20T07:34:29.749Z] Copying: 702/1024 [MB] (32 MBps)
[2024-11-20T07:34:30.681Z] Copying: 736/1024 [MB] (33 MBps)
[2024-11-20T07:34:31.619Z] Copying: 771/1024 [MB] (34 MBps)
[2024-11-20T07:34:32.554Z] Copying: 807/1024 [MB] (36 MBps)
[2024-11-20T07:34:33.489Z] Copying: 842/1024 [MB] (35 MBps)
[2024-11-20T07:34:34.424Z] Copying: 877/1024 [MB] (34 MBps)
[2024-11-20T07:34:35.360Z] Copying: 913/1024 [MB] (36 MBps)
[2024-11-20T07:34:36.734Z] Copying: 950/1024 [MB] (36 MBps)
[2024-11-20T07:34:37.714Z] Copying: 986/1024 [MB] (36 MBps)
[2024-11-20T07:34:38.669Z] Copying: 1021/1024 [MB] (34 MBps)
[2024-11-20T07:34:38.669Z] Copying: 1048544/1048576 [kB] (2916 kBps)
[2024-11-20T07:34:38.669Z] Copying: 1024/1024 [MB] (average 30 MBps)
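The progress lines average out exactly as reported: 1024 MB over the roughly 34 s from 07:34:04 to 07:34:38 is about 30 MBps, and the 2916 kBps entry covers only the final 32 kB tail. The figure can be recomputed from the per-interval samples; a sketch, again assuming the console output is saved as build.log:

    # Recompute the mean copy rate from the per-interval "Copying:" lines.
    # The intervals are roughly equal, so the simple mean tracks the reported
    # "average 30 MBps" closely; the kBps tail line is excluded by the pattern.
    grep -oE '\([0-9]+ MBps\)' build.log \
      | tr -d '()' \
      | awk '{ sum += $1; n++ } END { printf "%.1f MBps over %d samples\n", sum / n, n }'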
00:37:14.466 [2024-11-20 07:34:38.366589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.366672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:37:14.466 [2024-11-20 07:34:38.366695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:37:14.466 [2024-11-20 07:34:38.366724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.369098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:37:14.466 [2024-11-20 07:34:38.376693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.376736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:37:14.466 [2024-11-20 07:34:38.376755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.539 ms
00:37:14.466 [2024-11-20 07:34:38.376768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.388068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.388116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:37:14.466 [2024-11-20 07:34:38.388133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.694 ms
00:37:14.466 [2024-11-20 07:34:38.388146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.410112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.410159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:37:14.466 [2024-11-20 07:34:38.410176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.937 ms
00:37:14.466 [2024-11-20 07:34:38.410189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.416097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.416150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:37:14.466 [2024-11-20 07:34:38.416164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms
00:37:14.466 [2024-11-20 07:34:38.416175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.456928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.456967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:37:14.466 [2024-11-20 07:34:38.456982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.681 ms
00:37:14.466 [2024-11-20 07:34:38.456992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.480170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.480218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:37:14.466 [2024-11-20 07:34:38.480233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms
00:37:14.466 [2024-11-20 07:34:38.480245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.562473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.562533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:37:14.466 [2024-11-20 07:34:38.562551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.179 ms
00:37:14.466 [2024-11-20 07:34:38.562565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.606407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.606462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:37:14.466 [2024-11-20 07:34:38.606478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.818 ms
00:37:14.466 [2024-11-20 07:34:38.606489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.466 [2024-11-20 07:34:38.649340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.466 [2024-11-20 07:34:38.649404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:37:14.466 [2024-11-20 07:34:38.649420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.804 ms
00:37:14.466 [2024-11-20 07:34:38.649431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.726 [2024-11-20 07:34:38.690968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.726 [2024-11-20 07:34:38.691028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:37:14.726 [2024-11-20 07:34:38.691046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.487 ms
00:37:14.726 [2024-11-20 07:34:38.691058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.726 [2024-11-20 07:34:38.734156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:14.726 [2024-11-20 07:34:38.734251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:37:14.726 [2024-11-20 07:34:38.734270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.962 ms
00:37:14.726 [2024-11-20 07:34:38.734282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:14.726 [2024-11-20 07:34:38.734342] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:37:14.726 [2024-11-20 07:34:38.734363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120064 / 261120 wr_cnt: 1 state: open
00:37:14.726 [2024-11-20 07:34:38.734379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
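Band 1 is the only band left open after the run: 120064 of its 261120 blocks hold valid data. Assuming the 4 KiB FTL block size used in the layout arithmetic earlier (the log itself does not state this figure here), that works out as:

    \frac{120064}{261120} \approx 46\%, \qquad 120064 \times 4\,\mathrm{KiB} \approx 469\,\mathrm{MiB}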
[... Bands 3-92: identical entries, 0 / 261120 wr_cnt: 0 state: free ...]
00:37:14.727 [2024-11-20 07:34:38.735482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:37:14.727 [2024-11-20 07:34:38.735493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:14.727 [2024-11-20 07:34:38.735574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:14.727 [2024-11-20 07:34:38.735585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a6538a1-e573-4c8a-9e5f-4aff796a6df9 00:37:14.727 [2024-11-20 07:34:38.735596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120064 00:37:14.727 [2024-11-20 07:34:38.735606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121024 00:37:14.727 [2024-11-20 07:34:38.735616] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120064 00:37:14.727 [2024-11-20 07:34:38.735627] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:37:14.727 [2024-11-20 07:34:38.735636] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:14.727 [2024-11-20 07:34:38.735653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:14.727 [2024-11-20 07:34:38.735674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:14.727 [2024-11-20 07:34:38.735684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:14.727 [2024-11-20 07:34:38.735692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:14.727 [2024-11-20 07:34:38.735702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:14.727 [2024-11-20 07:34:38.735713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:14.727 [2024-11-20 07:34:38.735724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:37:14.727 [2024-11-20 07:34:38.735734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 [2024-11-20 07:34:38.759246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:14.727 [2024-11-20 07:34:38.759303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:14.727 [2024-11-20 07:34:38.759320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.466 ms 00:37:14.727 [2024-11-20 07:34:38.759342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 [2024-11-20 07:34:38.760026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:14.727 [2024-11-20 07:34:38.760049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:14.727 [2024-11-20 07:34:38.760061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:37:14.727 [2024-11-20 07:34:38.760072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 
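The "Dump statistics" action above reports WAF as total writes divided by user writes. A minimal sketch recomputing it from the counters printed in this dump (the one-liner is illustrative and not part of the test scripts):
# WAF = total writes / user writes, per the stats dump above
awk 'BEGIN { printf "WAF: %.4f\n", 121024 / 120064 }'   # -> WAF: 1.0080
The 960 writes beyond the user count appear to be the FTL's own metadata traffic; the same overhead shows up again in the statistics dumped at the end of this run.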
[2024-11-20 07:34:38.819454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.727 [2024-11-20 07:34:38.819520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:14.727 [2024-11-20 07:34:38.819544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.727 [2024-11-20 07:34:38.819556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 [2024-11-20 07:34:38.819638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.727 [2024-11-20 07:34:38.819651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:14.727 [2024-11-20 07:34:38.819662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.727 [2024-11-20 07:34:38.819673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 [2024-11-20 07:34:38.819785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.727 [2024-11-20 07:34:38.819800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:14.727 [2024-11-20 07:34:38.819824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.727 [2024-11-20 07:34:38.819856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.727 [2024-11-20 07:34:38.819877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.727 [2024-11-20 07:34:38.819889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:14.727 [2024-11-20 07:34:38.819901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.727 [2024-11-20 07:34:38.819912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:38.964680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:38.964750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:14.986 [2024-11-20 07:34:38.964778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:38.964790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:14.986 [2024-11-20 07:34:39.080171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:14.986 [2024-11-20 07:34:39.080313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:14.986 [2024-11-20 07:34:39.080396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080406] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:14.986 [2024-11-20 07:34:39.080550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:14.986 [2024-11-20 07:34:39.080630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:14.986 [2024-11-20 07:34:39.080700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:14.986 [2024-11-20 07:34:39.080771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:14.986 [2024-11-20 07:34:39.080782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:14.986 [2024-11-20 07:34:39.080792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:14.986 [2024-11-20 07:34:39.080962] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 716.170 ms, result 0 00:37:16.362 00:37:16.362 00:37:16.621 07:34:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:37:16.621 [2024-11-20 07:34:40.697766] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
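This spdk_dd invocation copies a window of ftl0 out to the test file. Assuming --skip and --count are counted in the device's 4 KiB FTL blocks (an assumption, but one consistent with the "Copying: 1024/1024 [MB]" total reported further down), the window sizes work out as follows:
# implied sizes of the --skip/--count window, assuming 4 KiB FTL blocks
bs=4096
echo "skip:  $(( 131072 * bs / 1048576 )) MiB"   # -> skip:  512 MiB
echo "count: $(( 262144 * bs / 1048576 )) MiB"   # -> count: 1024 MiB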
00:37:16.621 [2024-11-20 07:34:40.697976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78468 ] 00:37:16.879 [2024-11-20 07:34:40.889572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.879 [2024-11-20 07:34:41.016289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.446 [2024-11-20 07:34:41.412576] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:17.446 [2024-11-20 07:34:41.412676] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:17.446 [2024-11-20 07:34:41.576826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.576887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:17.446 [2024-11-20 07:34:41.576917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:17.446 [2024-11-20 07:34:41.576928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.576982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.576995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:17.446 [2024-11-20 07:34:41.577010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:37:17.446 [2024-11-20 07:34:41.577020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.577042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:17.446 [2024-11-20 07:34:41.578252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:17.446 [2024-11-20 07:34:41.578292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.578305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:17.446 [2024-11-20 07:34:41.578318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:37:17.446 [2024-11-20 07:34:41.578330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.579953] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:17.446 [2024-11-20 07:34:41.600911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.600965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:17.446 [2024-11-20 07:34:41.600999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.958 ms 00:37:17.446 [2024-11-20 07:34:41.601013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.601103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.601119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:17.446 [2024-11-20 07:34:41.601132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:37:17.446 [2024-11-20 07:34:41.601144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.608542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:37:17.446 [2024-11-20 07:34:41.608589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:17.446 [2024-11-20 07:34:41.608603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.305 ms 00:37:17.446 [2024-11-20 07:34:41.608615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.608730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.608748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:17.446 [2024-11-20 07:34:41.608761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:37:17.446 [2024-11-20 07:34:41.608772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.608849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.608865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:17.446 [2024-11-20 07:34:41.608878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:37:17.446 [2024-11-20 07:34:41.608890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.608922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:17.446 [2024-11-20 07:34:41.614297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.446 [2024-11-20 07:34:41.614338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:17.446 [2024-11-20 07:34:41.614353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.383 ms 00:37:17.446 [2024-11-20 07:34:41.614369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.446 [2024-11-20 07:34:41.614414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.447 [2024-11-20 07:34:41.614427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:17.447 [2024-11-20 07:34:41.614439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:37:17.447 [2024-11-20 07:34:41.614452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.447 [2024-11-20 07:34:41.614519] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:17.447 [2024-11-20 07:34:41.614547] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:17.447 [2024-11-20 07:34:41.614590] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:17.447 [2024-11-20 07:34:41.614615] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:17.447 [2024-11-20 07:34:41.614722] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:17.447 [2024-11-20 07:34:41.614738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:17.447 [2024-11-20 07:34:41.614754] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:17.447 [2024-11-20 07:34:41.614786] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:17.447 [2024-11-20 07:34:41.614801] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:17.447 [2024-11-20 07:34:41.614814] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:17.447 [2024-11-20 07:34:41.614827] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:17.447 [2024-11-20 07:34:41.614853] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:17.447 [2024-11-20 07:34:41.614865] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:17.447 [2024-11-20 07:34:41.614882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.447 [2024-11-20 07:34:41.614894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:17.447 [2024-11-20 07:34:41.614907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:37:17.447 [2024-11-20 07:34:41.614920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.447 [2024-11-20 07:34:41.615012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.447 [2024-11-20 07:34:41.615025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:17.447 [2024-11-20 07:34:41.615038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:37:17.447 [2024-11-20 07:34:41.615050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.447 [2024-11-20 07:34:41.615165] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:17.447 [2024-11-20 07:34:41.615187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:17.447 [2024-11-20 07:34:41.615201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:17.447 [2024-11-20 07:34:41.615249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:17.447 [2024-11-20 07:34:41.615282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.447 [2024-11-20 07:34:41.615304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:17.447 [2024-11-20 07:34:41.615315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:17.447 [2024-11-20 07:34:41.615326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:17.447 [2024-11-20 07:34:41.615336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:17.447 [2024-11-20 07:34:41.615347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:17.447 [2024-11-20 07:34:41.615369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:17.447 [2024-11-20 07:34:41.615391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615401] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:17.447 [2024-11-20 07:34:41.615424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:17.447 [2024-11-20 07:34:41.615471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:17.447 [2024-11-20 07:34:41.615503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:17.447 [2024-11-20 07:34:41.615536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:17.447 [2024-11-20 07:34:41.615569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.447 [2024-11-20 07:34:41.615590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:17.447 [2024-11-20 07:34:41.615601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:17.447 [2024-11-20 07:34:41.615611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:17.447 [2024-11-20 07:34:41.615622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:17.447 [2024-11-20 07:34:41.615632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:17.447 [2024-11-20 07:34:41.615643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:17.447 [2024-11-20 07:34:41.615664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:17.447 [2024-11-20 07:34:41.615676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615686] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:17.447 [2024-11-20 07:34:41.615698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:17.447 [2024-11-20 07:34:41.615709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:17.447 [2024-11-20 07:34:41.615732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:17.447 [2024-11-20 07:34:41.615743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:17.447 [2024-11-20 07:34:41.615754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:17.447 
[2024-11-20 07:34:41.615765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:17.447 [2024-11-20 07:34:41.615775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:17.447 [2024-11-20 07:34:41.615786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:17.447 [2024-11-20 07:34:41.615801] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:17.447 [2024-11-20 07:34:41.615828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.615843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:17.447 [2024-11-20 07:34:41.615855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:17.447 [2024-11-20 07:34:41.615868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:17.447 [2024-11-20 07:34:41.615880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:17.447 [2024-11-20 07:34:41.615892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:17.447 [2024-11-20 07:34:41.615904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:17.447 [2024-11-20 07:34:41.615917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:17.447 [2024-11-20 07:34:41.615930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:17.447 [2024-11-20 07:34:41.615942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:17.447 [2024-11-20 07:34:41.615954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.615966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.615977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.615989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.616001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:17.447 [2024-11-20 07:34:41.616013] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:17.447 [2024-11-20 07:34:41.616030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.616043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:17.447 [2024-11-20 07:34:41.616056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:17.447 [2024-11-20 07:34:41.616068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:17.448 [2024-11-20 07:34:41.616080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:17.448 [2024-11-20 07:34:41.616093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.448 [2024-11-20 07:34:41.616106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:17.448 [2024-11-20 07:34:41.616117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:37:17.448 [2024-11-20 07:34:41.616129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.660034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.660094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:17.706 [2024-11-20 07:34:41.660110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.845 ms 00:37:17.706 [2024-11-20 07:34:41.660123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.660230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.660241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:17.706 [2024-11-20 07:34:41.660253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:37:17.706 [2024-11-20 07:34:41.660264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.721269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.721334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:17.706 [2024-11-20 07:34:41.721351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.918 ms 00:37:17.706 [2024-11-20 07:34:41.721363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.721431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.721444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:17.706 [2024-11-20 07:34:41.721456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:17.706 [2024-11-20 07:34:41.721472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.722036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.722062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:17.706 [2024-11-20 07:34:41.722075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:37:17.706 [2024-11-20 07:34:41.722087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.722235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.722251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:17.706 [2024-11-20 07:34:41.722263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:37:17.706 [2024-11-20 07:34:41.722283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.742763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.742838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:17.706 [2024-11-20 07:34:41.742859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.454 ms 00:37:17.706 [2024-11-20 07:34:41.742869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.763379] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:37:17.706 [2024-11-20 07:34:41.763429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:17.706 [2024-11-20 07:34:41.763446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.763474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:17.706 [2024-11-20 07:34:41.763487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.408 ms 00:37:17.706 [2024-11-20 07:34:41.763498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.796399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.796458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:17.706 [2024-11-20 07:34:41.796473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.850 ms 00:37:17.706 [2024-11-20 07:34:41.796484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.816645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.816709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:17.706 [2024-11-20 07:34:41.816724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.108 ms 00:37:17.706 [2024-11-20 07:34:41.816736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.836827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.836876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:17.706 [2024-11-20 07:34:41.836892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.045 ms 00:37:17.706 [2024-11-20 07:34:41.836903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.706 [2024-11-20 07:34:41.837793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.706 [2024-11-20 07:34:41.837847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:17.706 [2024-11-20 07:34:41.837863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:37:17.706 [2024-11-20 07:34:41.837880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.932676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.932753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:17.965 [2024-11-20 07:34:41.932778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.768 ms 00:37:17.965 [2024-11-20 07:34:41.932790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.945839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:17.965 [2024-11-20 07:34:41.949287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.949328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:17.965 [2024-11-20 07:34:41.949345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.412 ms 00:37:17.965 [2024-11-20 07:34:41.949357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.949494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.949509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:17.965 [2024-11-20 07:34:41.949522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:17.965 [2024-11-20 07:34:41.949538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.951317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.951379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:17.965 [2024-11-20 07:34:41.951411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:37:17.965 [2024-11-20 07:34:41.951424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.951469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.951483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:17.965 [2024-11-20 07:34:41.951495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:37:17.965 [2024-11-20 07:34:41.951507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.951549] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:17.965 [2024-11-20 07:34:41.951569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.951581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:17.965 [2024-11-20 07:34:41.951593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:37:17.965 [2024-11-20 07:34:41.951605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.992303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.992377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:17.965 [2024-11-20 07:34:41.992395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.674 ms 00:37:17.965 [2024-11-20 07:34:41.992416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:17.965 [2024-11-20 07:34:41.992526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:17.965 [2024-11-20 07:34:41.992540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:17.965 [2024-11-20 07:34:41.992552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:37:17.965 [2024-11-20 07:34:41.992563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
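The layout dump in this startup is internally consistent at a 4 KiB FTL block size: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region, and the hex blk_sz fields in the superblock metadata convert to the MiB figures printed by dump_region. A quick cross-check, with values copied from the dump above (the helper name is illustrative):
# convert a blk_sz (in 4 KiB FTL blocks) to the MiB figure dump_region prints
to_mib() { awk -v blocks="$1" 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / 1048576 }'; }
to_mib $(( 0x5000 ))   # l2p region (type 0x2)          -> 80.00 MiB
to_mib $(( 0x800 ))    # one p2l checkpoint (type 0xa)  -> 8.00 MiB
awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / 1048576 }'   # l2p from entry count -> 80.00 MiB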
00:37:17.965 [2024-11-20 07:34:41.993756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.459 ms, result 0 00:37:19.341  [2024-11-20T07:34:44.537Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-20T07:34:45.474Z] Copying: 63/1024 [MB] (32 MBps) [2024-11-20T07:34:46.410Z] Copying: 96/1024 [MB] (33 MBps) [2024-11-20T07:34:47.346Z] Copying: 130/1024 [MB] (34 MBps) [2024-11-20T07:34:48.282Z] Copying: 165/1024 [MB] (34 MBps) [2024-11-20T07:34:49.656Z] Copying: 200/1024 [MB] (34 MBps) [2024-11-20T07:34:50.590Z] Copying: 233/1024 [MB] (33 MBps) [2024-11-20T07:34:51.531Z] Copying: 267/1024 [MB] (33 MBps) [2024-11-20T07:34:52.468Z] Copying: 302/1024 [MB] (34 MBps) [2024-11-20T07:34:53.406Z] Copying: 336/1024 [MB] (34 MBps) [2024-11-20T07:34:54.343Z] Copying: 370/1024 [MB] (33 MBps) [2024-11-20T07:34:55.278Z] Copying: 401/1024 [MB] (30 MBps) [2024-11-20T07:34:56.654Z] Copying: 434/1024 [MB] (33 MBps) [2024-11-20T07:34:57.628Z] Copying: 466/1024 [MB] (32 MBps) [2024-11-20T07:34:58.568Z] Copying: 499/1024 [MB] (32 MBps) [2024-11-20T07:34:59.503Z] Copying: 532/1024 [MB] (32 MBps) [2024-11-20T07:35:00.438Z] Copying: 565/1024 [MB] (33 MBps) [2024-11-20T07:35:01.376Z] Copying: 598/1024 [MB] (32 MBps) [2024-11-20T07:35:02.314Z] Copying: 628/1024 [MB] (29 MBps) [2024-11-20T07:35:03.249Z] Copying: 659/1024 [MB] (31 MBps) [2024-11-20T07:35:04.625Z] Copying: 692/1024 [MB] (33 MBps) [2024-11-20T07:35:05.591Z] Copying: 726/1024 [MB] (33 MBps) [2024-11-20T07:35:06.535Z] Copying: 760/1024 [MB] (33 MBps) [2024-11-20T07:35:07.472Z] Copying: 793/1024 [MB] (32 MBps) [2024-11-20T07:35:08.409Z] Copying: 824/1024 [MB] (31 MBps) [2024-11-20T07:35:09.354Z] Copying: 857/1024 [MB] (32 MBps) [2024-11-20T07:35:10.293Z] Copying: 889/1024 [MB] (32 MBps) [2024-11-20T07:35:11.726Z] Copying: 922/1024 [MB] (32 MBps) [2024-11-20T07:35:12.320Z] Copying: 952/1024 [MB] (30 MBps) [2024-11-20T07:35:13.257Z] Copying: 985/1024 [MB] (32 MBps) [2024-11-20T07:35:13.516Z] Copying: 1017/1024 [MB] (31 MBps) [2024-11-20T07:35:13.775Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 07:35:13.525865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.525962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:49.572 [2024-11-20 07:35:13.525988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:49.572 [2024-11-20 07:35:13.526006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.526059] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:49.572 [2024-11-20 07:35:13.533622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.533673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:49.572 [2024-11-20 07:35:13.533694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.515 ms 00:37:49.572 [2024-11-20 07:35:13.533711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.534048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.534076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:49.572 [2024-11-20 07:35:13.534108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:37:49.572 [2024-11-20 07:35:13.534125] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.537868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.537908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:49.572 [2024-11-20 07:35:13.537922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:37:49.572 [2024-11-20 07:35:13.537934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.543258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.543296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:49.572 [2024-11-20 07:35:13.543308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.285 ms 00:37:49.572 [2024-11-20 07:35:13.543318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.581402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.581452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:49.572 [2024-11-20 07:35:13.581467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.029 ms 00:37:49.572 [2024-11-20 07:35:13.581478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.603844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.572 [2024-11-20 07:35:13.603896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:49.572 [2024-11-20 07:35:13.603911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.323 ms 00:37:49.572 [2024-11-20 07:35:13.603922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.572 [2024-11-20 07:35:13.708562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.573 [2024-11-20 07:35:13.708632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:49.573 [2024-11-20 07:35:13.708648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.590 ms 00:37:49.573 [2024-11-20 07:35:13.708660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.573 [2024-11-20 07:35:13.746933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.573 [2024-11-20 07:35:13.746984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:49.573 [2024-11-20 07:35:13.746999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.253 ms 00:37:49.573 [2024-11-20 07:35:13.747011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.833 [2024-11-20 07:35:13.783624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.833 [2024-11-20 07:35:13.783675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:49.833 [2024-11-20 07:35:13.783705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:37:49.833 [2024-11-20 07:35:13.783716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.833 [2024-11-20 07:35:13.820642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.833 [2024-11-20 07:35:13.820684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:49.833 [2024-11-20 07:35:13.820698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 36.883 ms 00:37:49.833 [2024-11-20 07:35:13.820708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.833 [2024-11-20 07:35:13.857146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.833 [2024-11-20 07:35:13.857188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:49.833 [2024-11-20 07:35:13.857203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.343 ms 00:37:49.834 [2024-11-20 07:35:13.857212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.834 [2024-11-20 07:35:13.857252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:49.834 [2024-11-20 07:35:13.857269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:37:49.834 [2024-11-20 07:35:13.857283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 
state: free 00:37:49.834 [2024-11-20 07:35:13.857502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 
0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.857992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:49.834 [2024-11-20 07:35:13.858227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858324] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:49.835 [2024-11-20 07:35:13.858395] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:49.835 [2024-11-20 07:35:13.858405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a6538a1-e573-4c8a-9e5f-4aff796a6df9 00:37:49.835 [2024-11-20 07:35:13.858416] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:37:49.835 [2024-11-20 07:35:13.858426] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11968 00:37:49.835 [2024-11-20 07:35:13.858436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11008 00:37:49.835 [2024-11-20 07:35:13.858447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0872 00:37:49.835 [2024-11-20 07:35:13.858457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:49.835 [2024-11-20 07:35:13.858472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:49.835 [2024-11-20 07:35:13.858482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:49.835 [2024-11-20 07:35:13.858503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:49.835 [2024-11-20 07:35:13.858513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:49.835 [2024-11-20 07:35:13.858523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.835 [2024-11-20 07:35:13.858533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:49.835 [2024-11-20 07:35:13.858544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:37:49.835 [2024-11-20 07:35:13.858554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.879392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.835 [2024-11-20 07:35:13.879431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:49.835 [2024-11-20 07:35:13.879445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.784 ms 00:37:49.835 [2024-11-20 07:35:13.879461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.880064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:49.835 [2024-11-20 07:35:13.880086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:49.835 [2024-11-20 07:35:13.880098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:37:49.835 [2024-11-20 07:35:13.880108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.932854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:49.835 
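The statistics dump above contains everything needed to check the reported write amplification: WAF = total writes / user writes = 11968 / 11008 ≈ 1.0872, exactly the figure ftl_dev_dump_stats prints. A one-line sanity check (an editor's sketch, not part of the test run):

    # WAF = total writes / user writes, using the counters from the dump above
    awk 'BEGIN { printf "%.4f\n", 11968 / 11008 }'   # -> 1.0872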
[2024-11-20 07:35:13.932897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:49.835 [2024-11-20 07:35:13.932917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:49.835 [2024-11-20 07:35:13.932927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.932984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:49.835 [2024-11-20 07:35:13.932995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:49.835 [2024-11-20 07:35:13.933006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:49.835 [2024-11-20 07:35:13.933015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.933084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:49.835 [2024-11-20 07:35:13.933098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:49.835 [2024-11-20 07:35:13.933108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:49.835 [2024-11-20 07:35:13.933123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:49.835 [2024-11-20 07:35:13.933146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:49.835 [2024-11-20 07:35:13.933156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:49.835 [2024-11-20 07:35:13.933167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:49.835 [2024-11-20 07:35:13.933177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.061435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.061504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:50.095 [2024-11-20 07:35:14.061527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.061538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:50.095 [2024-11-20 07:35:14.169216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:50.095 [2024-11-20 07:35:14.169353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:50.095 [2024-11-20 07:35:14.169438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169576] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:50.095 [2024-11-20 07:35:14.169603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:50.095 [2024-11-20 07:35:14.169683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:50.095 [2024-11-20 07:35:14.169758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.095 [2024-11-20 07:35:14.169769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.095 [2024-11-20 07:35:14.169819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:50.095 [2024-11-20 07:35:14.169855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:50.095 [2024-11-20 07:35:14.169868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:50.096 [2024-11-20 07:35:14.169878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:50.096 [2024-11-20 07:35:14.170008] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.134 ms, result 0 00:37:51.033 00:37:51.033 00:37:51.291 07:35:15 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:53.194 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:53.194 Process with pid 77083 is not found 00:37:53.194 Remove shared memory files 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77083 00:37:53.194 07:35:17 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77083 ']' 00:37:53.194 07:35:17 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77083 00:37:53.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77083) - No such process 00:37:53.194 07:35:17 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77083 is not found' 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:37:53.194 07:35:17 
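The restore verification and teardown traced above follow two stock autotest patterns: an md5 round-trip that proves the data survived the shutdown/restore cycle (restore.sh@82), and a kill -0 liveness probe so killprocess degrades gracefully once the target has already exited (autotest_common.sh@958). A minimal sketch of both, using the pid from this run; the real helpers in common/autotest_common.sh carry extra bookkeeping:

    # Data-integrity round-trip: checksum before the shutdown, verify after restore.
    md5sum testfile > testfile.md5
    # ... shutdown and restore happen here ...
    md5sum -c testfile.md5            # prints "testfile: OK" on a byte-identical restore

    # kill -0 delivers no signal; it only tests whether the pid is still alive.
    if kill -0 77083 2>/dev/null; then
        kill 77083
    else
        echo 'Process with pid 77083 is not found'
    fi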
ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:53.194 07:35:17 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:37:53.194 ************************************ 00:37:53.194 END TEST ftl_restore 00:37:53.194 ************************************ 00:37:53.194 00:37:53.194 real 2m50.661s 00:37:53.194 user 2m38.571s 00:37:53.194 sys 0m15.224s 00:37:53.194 07:35:17 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:53.194 07:35:17 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:37:53.194 07:35:17 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:37:53.194 07:35:17 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:53.194 07:35:17 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:53.194 07:35:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:53.194 ************************************ 00:37:53.194 START TEST ftl_dirty_shutdown 00:37:53.194 ************************************ 00:37:53.194 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:37:53.454 * Looking for test storage... 00:37:53.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:53.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.454 --rc genhtml_branch_coverage=1 00:37:53.454 --rc genhtml_function_coverage=1 00:37:53.454 --rc genhtml_legend=1 00:37:53.454 --rc geninfo_all_blocks=1 00:37:53.454 --rc geninfo_unexecuted_blocks=1 00:37:53.454 00:37:53.454 ' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:53.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.454 --rc genhtml_branch_coverage=1 00:37:53.454 --rc genhtml_function_coverage=1 00:37:53.454 --rc genhtml_legend=1 00:37:53.454 --rc geninfo_all_blocks=1 00:37:53.454 --rc geninfo_unexecuted_blocks=1 00:37:53.454 00:37:53.454 ' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:53.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.454 --rc genhtml_branch_coverage=1 00:37:53.454 --rc genhtml_function_coverage=1 00:37:53.454 --rc genhtml_legend=1 00:37:53.454 --rc geninfo_all_blocks=1 00:37:53.454 --rc geninfo_unexecuted_blocks=1 00:37:53.454 00:37:53.454 ' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:53.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.454 --rc genhtml_branch_coverage=1 00:37:53.454 --rc genhtml_function_coverage=1 00:37:53.454 --rc genhtml_legend=1 00:37:53.454 --rc geninfo_all_blocks=1 00:37:53.454 --rc geninfo_unexecuted_blocks=1 00:37:53.454 00:37:53.454 ' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:37:53.454 07:35:17 
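The lcov check traced above (common.sh@1693 via scripts/common.sh@373) splits both version strings into numeric fields (the real helper's IFS also covers '-' and ':') and compares them field by field, so "lt 1.15 2" succeeds as soon as 1 < 2 in the first field. An illustrative standalone equivalent (a sketch, not the scripts/common.sh source):

    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # decided at the first field: 1 < 2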
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78902 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78902 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78902 ']' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:37:53.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:53.454 07:35:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:53.713 [2024-11-20 07:35:17.665960] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
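At this point dirty_shutdown.sh has parsed its options (-c 0000:00:10.0 as the NV cache, the remaining arguments as base device 0000:00:11.0, with timeout=240 and 4096-byte blocks), started spdk_tgt pinned to core 0 as pid 78902, and parked in waitforlisten until the RPC socket answers. One way to approximate that launch-and-wait pattern (a sketch; the real waitforlisten in autotest_common.sh adds retry limits and further checks):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target responds to a no-op query.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$svcpid" || { echo 'spdk_tgt exited before listening'; exit 1; }
        sleep 0.5
    done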
00:37:53.713 [2024-11-20 07:35:17.666097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78902 ] 00:37:53.713 [2024-11-20 07:35:17.848251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.973 [2024-11-20 07:35:18.028058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:37:54.911 07:35:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:55.169 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:55.428 { 00:37:55.428 "name": "nvme0n1", 00:37:55.428 "aliases": [ 00:37:55.428 "52eb0050-7534-40b9-95c9-cc9880f6ec51" 00:37:55.428 ], 00:37:55.428 "product_name": "NVMe disk", 00:37:55.428 "block_size": 4096, 00:37:55.428 "num_blocks": 1310720, 00:37:55.428 "uuid": "52eb0050-7534-40b9-95c9-cc9880f6ec51", 00:37:55.428 "numa_id": -1, 00:37:55.428 "assigned_rate_limits": { 00:37:55.428 "rw_ios_per_sec": 0, 00:37:55.428 "rw_mbytes_per_sec": 0, 00:37:55.428 "r_mbytes_per_sec": 0, 00:37:55.428 "w_mbytes_per_sec": 0 00:37:55.428 }, 00:37:55.428 "claimed": true, 00:37:55.428 "claim_type": "read_many_write_one", 00:37:55.428 "zoned": false, 00:37:55.428 "supported_io_types": { 00:37:55.428 "read": true, 00:37:55.428 "write": true, 00:37:55.428 "unmap": true, 00:37:55.428 "flush": true, 00:37:55.428 "reset": true, 00:37:55.428 "nvme_admin": true, 00:37:55.428 "nvme_io": true, 00:37:55.428 "nvme_io_md": false, 00:37:55.428 "write_zeroes": true, 00:37:55.428 "zcopy": false, 00:37:55.428 "get_zone_info": false, 00:37:55.428 "zone_management": false, 00:37:55.428 "zone_append": false, 00:37:55.428 "compare": true, 00:37:55.428 "compare_and_write": false, 00:37:55.428 "abort": true, 00:37:55.428 "seek_hole": false, 00:37:55.428 "seek_data": false, 00:37:55.428 
"copy": true, 00:37:55.428 "nvme_iov_md": false 00:37:55.428 }, 00:37:55.428 "driver_specific": { 00:37:55.428 "nvme": [ 00:37:55.428 { 00:37:55.428 "pci_address": "0000:00:11.0", 00:37:55.428 "trid": { 00:37:55.428 "trtype": "PCIe", 00:37:55.428 "traddr": "0000:00:11.0" 00:37:55.428 }, 00:37:55.428 "ctrlr_data": { 00:37:55.428 "cntlid": 0, 00:37:55.428 "vendor_id": "0x1b36", 00:37:55.428 "model_number": "QEMU NVMe Ctrl", 00:37:55.428 "serial_number": "12341", 00:37:55.428 "firmware_revision": "8.0.0", 00:37:55.428 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:55.428 "oacs": { 00:37:55.428 "security": 0, 00:37:55.428 "format": 1, 00:37:55.428 "firmware": 0, 00:37:55.428 "ns_manage": 1 00:37:55.428 }, 00:37:55.428 "multi_ctrlr": false, 00:37:55.428 "ana_reporting": false 00:37:55.428 }, 00:37:55.428 "vs": { 00:37:55.428 "nvme_version": "1.4" 00:37:55.428 }, 00:37:55.428 "ns_data": { 00:37:55.428 "id": 1, 00:37:55.428 "can_share": false 00:37:55.428 } 00:37:55.428 } 00:37:55.428 ], 00:37:55.428 "mp_policy": "active_passive" 00:37:55.428 } 00:37:55.428 } 00:37:55.428 ]' 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:55.428 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:55.687 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=0a7b021e-9f1d-4924-bb47-fb0137ac838a 00:37:55.687 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:37:55.687 07:35:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a7b021e-9f1d-4924-bb47-fb0137ac838a 00:37:56.255 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:56.255 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e8d9f660-54f1-4000-8d08-889fab77b361 00:37:56.255 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e8d9f660-54f1-4000-8d08-889fab77b361 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:56.514 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:56.772 { 00:37:56.772 "name": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:56.772 "aliases": [ 00:37:56.772 "lvs/nvme0n1p0" 00:37:56.772 ], 00:37:56.772 "product_name": "Logical Volume", 00:37:56.772 "block_size": 4096, 00:37:56.772 "num_blocks": 26476544, 00:37:56.772 "uuid": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:56.772 "assigned_rate_limits": { 00:37:56.772 "rw_ios_per_sec": 0, 00:37:56.772 "rw_mbytes_per_sec": 0, 00:37:56.772 "r_mbytes_per_sec": 0, 00:37:56.772 "w_mbytes_per_sec": 0 00:37:56.772 }, 00:37:56.772 "claimed": false, 00:37:56.772 "zoned": false, 00:37:56.772 "supported_io_types": { 00:37:56.772 "read": true, 00:37:56.772 "write": true, 00:37:56.772 "unmap": true, 00:37:56.772 "flush": false, 00:37:56.772 "reset": true, 00:37:56.772 "nvme_admin": false, 00:37:56.772 "nvme_io": false, 00:37:56.772 "nvme_io_md": false, 00:37:56.772 "write_zeroes": true, 00:37:56.772 "zcopy": false, 00:37:56.772 "get_zone_info": false, 00:37:56.772 "zone_management": false, 00:37:56.772 "zone_append": false, 00:37:56.772 "compare": false, 00:37:56.772 "compare_and_write": false, 00:37:56.772 "abort": false, 00:37:56.772 "seek_hole": true, 00:37:56.772 "seek_data": true, 00:37:56.772 "copy": false, 00:37:56.772 "nvme_iov_md": false 00:37:56.772 }, 00:37:56.772 "driver_specific": { 00:37:56.772 "lvol": { 00:37:56.772 "lvol_store_uuid": "e8d9f660-54f1-4000-8d08-889fab77b361", 00:37:56.772 "base_bdev": "nvme0n1", 00:37:56.772 "thin_provision": true, 00:37:56.772 "num_allocated_clusters": 0, 00:37:56.772 "snapshot": false, 00:37:56.772 "clone": false, 00:37:56.772 "esnap_clone": false 00:37:56.772 } 00:37:56.772 } 00:37:56.772 } 00:37:56.772 ]' 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:56.772 07:35:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:56.773 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:37:56.773 07:35:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:37:56.773 07:35:20 ftl.ftl_dirty_shutdown -- 
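get_bdev_size, traced twice above, pulls block_size and num_blocks out of bdev_get_bdevs with jq and converts the product to MiB. Both reported sizes check out:

    # nvme0n1: 4096 B/block * 1310720 blocks = 5 GiB
    echo $((4096 * 1310720 / 1024 / 1024))     # -> 5120 (MiB)
    # lvol 4b998565-...: 4096 B/block * 26476544 blocks
    echo $((4096 * 26476544 / 1024 / 1024))    # -> 103424 (MiB)

Hence bdev_size=5120 for the earlier 103424-MiB base check, while the base_size=5171 seen above is consistent with one twentieth of the 103424 MiB lvol (103424 / 20 = 5171 in integer arithmetic), sizing the write-buffer cache at roughly 5% of the base device.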
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:57.341 { 00:37:57.341 "name": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:57.341 "aliases": [ 00:37:57.341 "lvs/nvme0n1p0" 00:37:57.341 ], 00:37:57.341 "product_name": "Logical Volume", 00:37:57.341 "block_size": 4096, 00:37:57.341 "num_blocks": 26476544, 00:37:57.341 "uuid": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:57.341 "assigned_rate_limits": { 00:37:57.341 "rw_ios_per_sec": 0, 00:37:57.341 "rw_mbytes_per_sec": 0, 00:37:57.341 "r_mbytes_per_sec": 0, 00:37:57.341 "w_mbytes_per_sec": 0 00:37:57.341 }, 00:37:57.341 "claimed": false, 00:37:57.341 "zoned": false, 00:37:57.341 "supported_io_types": { 00:37:57.341 "read": true, 00:37:57.341 "write": true, 00:37:57.341 "unmap": true, 00:37:57.341 "flush": false, 00:37:57.341 "reset": true, 00:37:57.341 "nvme_admin": false, 00:37:57.341 "nvme_io": false, 00:37:57.341 "nvme_io_md": false, 00:37:57.341 "write_zeroes": true, 00:37:57.341 "zcopy": false, 00:37:57.341 "get_zone_info": false, 00:37:57.341 "zone_management": false, 00:37:57.341 "zone_append": false, 00:37:57.341 "compare": false, 00:37:57.341 "compare_and_write": false, 00:37:57.341 "abort": false, 00:37:57.341 "seek_hole": true, 00:37:57.341 "seek_data": true, 00:37:57.341 "copy": false, 00:37:57.341 "nvme_iov_md": false 00:37:57.341 }, 00:37:57.341 "driver_specific": { 00:37:57.341 "lvol": { 00:37:57.341 "lvol_store_uuid": "e8d9f660-54f1-4000-8d08-889fab77b361", 00:37:57.341 "base_bdev": "nvme0n1", 00:37:57.341 "thin_provision": true, 00:37:57.341 "num_allocated_clusters": 0, 00:37:57.341 "snapshot": false, 00:37:57.341 "clone": false, 00:37:57.341 "esnap_clone": false 00:37:57.341 } 00:37:57.341 } 00:37:57.341 } 00:37:57.341 ]' 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:37:57.341 07:35:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:57.600 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4b998565-49dd-49de-a960-d3405ed2cd3a 00:37:57.860 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:57.860 { 00:37:57.860 "name": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:57.860 "aliases": [ 00:37:57.860 "lvs/nvme0n1p0" 00:37:57.860 ], 00:37:57.860 "product_name": "Logical Volume", 00:37:57.860 "block_size": 4096, 00:37:57.860 "num_blocks": 26476544, 00:37:57.860 "uuid": "4b998565-49dd-49de-a960-d3405ed2cd3a", 00:37:57.860 "assigned_rate_limits": { 00:37:57.860 "rw_ios_per_sec": 0, 00:37:57.860 "rw_mbytes_per_sec": 0, 00:37:57.860 "r_mbytes_per_sec": 0, 00:37:57.860 "w_mbytes_per_sec": 0 00:37:57.860 }, 00:37:57.860 "claimed": false, 00:37:57.860 "zoned": false, 00:37:57.860 "supported_io_types": { 00:37:57.860 "read": true, 00:37:57.860 "write": true, 00:37:57.860 "unmap": true, 00:37:57.860 "flush": false, 00:37:57.860 "reset": true, 00:37:57.860 "nvme_admin": false, 00:37:57.860 "nvme_io": false, 00:37:57.860 "nvme_io_md": false, 00:37:57.860 "write_zeroes": true, 00:37:57.860 "zcopy": false, 00:37:57.860 "get_zone_info": false, 00:37:57.860 "zone_management": false, 00:37:57.860 "zone_append": false, 00:37:57.860 "compare": false, 00:37:57.860 "compare_and_write": false, 00:37:57.860 "abort": false, 00:37:57.860 "seek_hole": true, 00:37:57.860 "seek_data": true, 00:37:57.860 "copy": false, 00:37:57.860 "nvme_iov_md": false 00:37:57.860 }, 00:37:57.860 "driver_specific": { 00:37:57.860 "lvol": { 00:37:57.860 "lvol_store_uuid": "e8d9f660-54f1-4000-8d08-889fab77b361", 00:37:57.860 "base_bdev": "nvme0n1", 00:37:57.860 "thin_provision": true, 00:37:57.860 "num_allocated_clusters": 0, 00:37:57.860 "snapshot": false, 00:37:57.860 "clone": false, 00:37:57.860 "esnap_clone": false 00:37:57.860 } 00:37:57.860 } 00:37:57.860 } 00:37:57.860 ]' 00:37:57.860 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:57.860 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:57.860 07:35:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:57.860 07:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4b998565-49dd-49de-a960-d3405ed2cd3a 
--l2p_dram_limit 10' 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:37:57.861 07:35:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4b998565-49dd-49de-a960-d3405ed2cd3a --l2p_dram_limit 10 -c nvc0n1p0 00:37:58.121 [2024-11-20 07:35:22.296724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.121 [2024-11-20 07:35:22.296780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:58.121 [2024-11-20 07:35:22.296800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:58.121 [2024-11-20 07:35:22.296811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.121 [2024-11-20 07:35:22.296894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.121 [2024-11-20 07:35:22.296906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:58.121 [2024-11-20 07:35:22.296920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:37:58.121 [2024-11-20 07:35:22.296931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.121 [2024-11-20 07:35:22.296962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:58.121 [2024-11-20 07:35:22.298068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:58.121 [2024-11-20 07:35:22.298108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.121 [2024-11-20 07:35:22.298119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:58.121 [2024-11-20 07:35:22.298134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:37:58.121 [2024-11-20 07:35:22.298145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.121 [2024-11-20 07:35:22.298233] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2a0b8524-eff8-4987-9ed1-fbd226a5ac54 00:37:58.121 [2024-11-20 07:35:22.299680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.299711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:58.122 [2024-11-20 07:35:22.299723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:37:58.122 [2024-11-20 07:35:22.299738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.307305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.307342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:58.122 [2024-11-20 07:35:22.307358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.522 ms 00:37:58.122 [2024-11-20 07:35:22.307371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.307474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.307490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:58.122 [2024-11-20 07:35:22.307502] 
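The FTL device is then assembled over the lvol (data) and the 5171 MiB split of nvc0n1 (cache). The construct call, with arguments exactly as traced:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d 4b998565-49dd-49de-a960-d3405ed2cd3a \
        --l2p_dram_limit 10 -c nvc0n1p0

--l2p_dram_limit 10 comes from l2p_dram_size_mb=10 and bounds the resident L2P cache in MiB; the on-disk mapping is larger. The layout dump below reports 20971520 L2P entries at an address size of 4 bytes, i.e. 20971520 * 4 B = 80 MiB, matching its "Region l2p ... blocks: 80.00 MiB" line.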
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:37:58.122 [2024-11-20 07:35:22.307519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.307595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.307611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:58.122 [2024-11-20 07:35:22.307622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:37:58.122 [2024-11-20 07:35:22.307638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.307664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:58.122 [2024-11-20 07:35:22.312897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.312930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:58.122 [2024-11-20 07:35:22.312947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:37:58.122 [2024-11-20 07:35:22.312957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.312994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.313005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:58.122 [2024-11-20 07:35:22.313018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:58.122 [2024-11-20 07:35:22.313028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.313067] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:58.122 [2024-11-20 07:35:22.313198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:58.122 [2024-11-20 07:35:22.313218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:58.122 [2024-11-20 07:35:22.313232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:58.122 [2024-11-20 07:35:22.313247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313260] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:58.122 [2024-11-20 07:35:22.313284] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:58.122 [2024-11-20 07:35:22.313299] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:58.122 [2024-11-20 07:35:22.313309] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:58.122 [2024-11-20 07:35:22.313322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.313332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:58.122 [2024-11-20 07:35:22.313345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:37:58.122 [2024-11-20 07:35:22.313365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.313447] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.122 [2024-11-20 07:35:22.313462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:58.122 [2024-11-20 07:35:22.313475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:37:58.122 [2024-11-20 07:35:22.313484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.122 [2024-11-20 07:35:22.313584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:58.122 [2024-11-20 07:35:22.313598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:58.122 [2024-11-20 07:35:22.313611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:58.122 [2024-11-20 07:35:22.313644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:58.122 [2024-11-20 07:35:22.313677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:58.122 [2024-11-20 07:35:22.313698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:58.122 [2024-11-20 07:35:22.313708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:58.122 [2024-11-20 07:35:22.313722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:58.122 [2024-11-20 07:35:22.313732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:58.122 [2024-11-20 07:35:22.313744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:58.122 [2024-11-20 07:35:22.313753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:58.122 [2024-11-20 07:35:22.313776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:58.122 [2024-11-20 07:35:22.313811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:58.122 [2024-11-20 07:35:22.313853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:58.122 [2024-11-20 07:35:22.313886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313908] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:58.122 [2024-11-20 07:35:22.313917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:58.122 [2024-11-20 07:35:22.313938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:58.122 [2024-11-20 07:35:22.313952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:58.122 [2024-11-20 07:35:22.313962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:58.122 [2024-11-20 07:35:22.313974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:58.122 [2024-11-20 07:35:22.313984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:58.122 [2024-11-20 07:35:22.313995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:58.122 [2024-11-20 07:35:22.314005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:58.122 [2024-11-20 07:35:22.314016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:58.122 [2024-11-20 07:35:22.314025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.314037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:58.122 [2024-11-20 07:35:22.314046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:58.122 [2024-11-20 07:35:22.314058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.314067] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:58.122 [2024-11-20 07:35:22.314089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:58.122 [2024-11-20 07:35:22.314099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:58.122 [2024-11-20 07:35:22.314114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:58.122 [2024-11-20 07:35:22.314124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:58.122 [2024-11-20 07:35:22.314139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:58.122 [2024-11-20 07:35:22.314149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:58.122 [2024-11-20 07:35:22.314161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:58.122 [2024-11-20 07:35:22.314170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:58.122 [2024-11-20 07:35:22.314182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:58.122 [2024-11-20 07:35:22.314197] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:58.122 [2024-11-20 07:35:22.314212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:58.122 [2024-11-20 07:35:22.314227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:58.122 [2024-11-20 07:35:22.314240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:58.122 [2024-11-20 07:35:22.314250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:58.122 [2024-11-20 07:35:22.314264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:58.123 [2024-11-20 07:35:22.314275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:58.123 [2024-11-20 07:35:22.314288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:58.123 [2024-11-20 07:35:22.314298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:58.123 [2024-11-20 07:35:22.314311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:58.123 [2024-11-20 07:35:22.314321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:58.123 [2024-11-20 07:35:22.314337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:58.123 [2024-11-20 07:35:22.314395] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:58.123 [2024-11-20 07:35:22.314409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:58.123 [2024-11-20 07:35:22.314434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:58.123 [2024-11-20 07:35:22.314444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:58.123 [2024-11-20 07:35:22.314457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:58.123 [2024-11-20 07:35:22.314468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:58.123 [2024-11-20 07:35:22.314482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:58.123 [2024-11-20 07:35:22.314493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:37:58.123 [2024-11-20 07:35:22.314505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:58.123 [2024-11-20 07:35:22.314549] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:37:58.123 [2024-11-20 07:35:22.314567] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:00.661 [2024-11-20 07:35:24.846380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.661 [2024-11-20 07:35:24.846449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:00.661 [2024-11-20 07:35:24.846467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2531.816 ms 00:38:00.661 [2024-11-20 07:35:24.846481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.886567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.886622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:00.921 [2024-11-20 07:35:24.886639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.766 ms 00:38:00.921 [2024-11-20 07:35:24.886652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.886827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.886845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:00.921 [2024-11-20 07:35:24.886857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:38:00.921 [2024-11-20 07:35:24.886873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.934655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.934708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:00.921 [2024-11-20 07:35:24.934723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.714 ms 00:38:00.921 [2024-11-20 07:35:24.934737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.934789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.934808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:00.921 [2024-11-20 07:35:24.934828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:00.921 [2024-11-20 07:35:24.934842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.935341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.935361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:00.921 [2024-11-20 07:35:24.935373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:38:00.921 [2024-11-20 07:35:24.935386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.935494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.935507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:00.921 [2024-11-20 07:35:24.935521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:38:00.921 [2024-11-20 07:35:24.935538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.956196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.956246] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:00.921 [2024-11-20 07:35:24.956261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.635 ms 00:38:00.921 [2024-11-20 07:35:24.956275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:24.968500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:00.921 [2024-11-20 07:35:24.971798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:24.971832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:00.921 [2024-11-20 07:35:24.971848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.407 ms 00:38:00.921 [2024-11-20 07:35:24.971859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:25.054364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:25.054426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:00.921 [2024-11-20 07:35:25.054447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.453 ms 00:38:00.921 [2024-11-20 07:35:25.054458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:25.054642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:25.054658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:00.921 [2024-11-20 07:35:25.054676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:38:00.921 [2024-11-20 07:35:25.054686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:00.921 [2024-11-20 07:35:25.092349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:00.921 [2024-11-20 07:35:25.092399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:00.921 [2024-11-20 07:35:25.092418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.601 ms 00:38:00.921 [2024-11-20 07:35:25.092430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:01.180 [2024-11-20 07:35:25.130587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:01.180 [2024-11-20 07:35:25.130633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:01.180 [2024-11-20 07:35:25.130669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.116 ms 00:38:01.180 [2024-11-20 07:35:25.130680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:01.180 [2024-11-20 07:35:25.131413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:01.180 [2024-11-20 07:35:25.131433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:01.180 [2024-11-20 07:35:25.131447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:38:01.180 [2024-11-20 07:35:25.131458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:01.180 [2024-11-20 07:35:25.234546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:01.180 [2024-11-20 07:35:25.234604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:01.180 [2024-11-20 07:35:25.234635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.017 ms 00:38:01.180 [2024-11-20 07:35:25.234646] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.273763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:01.180 [2024-11-20 07:35:25.273843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:38:01.180 [2024-11-20 07:35:25.273863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.018 ms
00:38:01.180 [2024-11-20 07:35:25.273874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.311212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:01.180 [2024-11-20 07:35:25.311258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:38:01.180 [2024-11-20 07:35:25.311276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.284 ms
00:38:01.180 [2024-11-20 07:35:25.311287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.348657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:01.180 [2024-11-20 07:35:25.348699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:38:01.180 [2024-11-20 07:35:25.348718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.318 ms
00:38:01.180 [2024-11-20 07:35:25.348728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.348777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:01.180 [2024-11-20 07:35:25.348790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:38:01.180 [2024-11-20 07:35:25.348807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:38:01.180 [2024-11-20 07:35:25.348827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.348935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:01.180 [2024-11-20 07:35:25.348948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:38:01.180 [2024-11-20 07:35:25.348964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:38:01.180 [2024-11-20 07:35:25.348974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:01.180 [2024-11-20 07:35:25.350069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3052.841 ms, result 0
00:38:01.180 {
00:38:01.180 "name": "ftl0",
00:38:01.180 "uuid": "2a0b8524-eff8-4987-9ed1-fbd226a5ac54"
00:38:01.180 }
00:38:01.180 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:38:01.180 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:38:01.439 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:38:01.439 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:38:01.439 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:38:02.008 /dev/nbd0
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:38:02.008 1+0 records in
00:38:02.008 1+0 records out
00:38:02.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350644 s, 11.7 MB/s
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
00:38:02.008 07:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:38:02.008 [2024-11-20 07:35:26.062356] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:38:02.008 [2024-11-20 07:35:26.062529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79044 ]
00:38:02.268 [2024-11-20 07:35:26.264745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:02.268 [2024-11-20 07:35:26.435657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:03.648  [2024-11-20T07:35:28.788Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-20T07:35:30.165Z] Copying: 383/1024 [MB] (189 MBps) [2024-11-20T07:35:31.099Z] Copying: 565/1024 [MB] (182 MBps) [2024-11-20T07:35:32.033Z] Copying: 753/1024 [MB] (187 MBps) [2024-11-20T07:35:32.292Z] Copying: 937/1024 [MB] (183 MBps) [2024-11-20T07:35:33.669Z] Copying: 1024/1024 [MB] (average 185 MBps)
00:38:09.466
00:38:09.466 07:35:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:38:11.387 07:35:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:38:11.387 [2024-11-20 07:35:35.501370] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:38:11.387 [2024-11-20 07:35:35.501547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79142 ] 00:38:11.670 [2024-11-20 07:35:35.703231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.670 [2024-11-20 07:35:35.821376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.047  [2024-11-20T07:35:38.186Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-20T07:35:39.564Z] Copying: 36/1024 [MB] (17 MBps) [2024-11-20T07:35:40.500Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-20T07:35:41.437Z] Copying: 73/1024 [MB] (18 MBps) [2024-11-20T07:35:42.379Z] Copying: 92/1024 [MB] (19 MBps) [2024-11-20T07:35:43.331Z] Copying: 111/1024 [MB] (18 MBps) [2024-11-20T07:35:44.266Z] Copying: 130/1024 [MB] (19 MBps) [2024-11-20T07:35:45.201Z] Copying: 149/1024 [MB] (19 MBps) [2024-11-20T07:35:46.601Z] Copying: 167/1024 [MB] (18 MBps) [2024-11-20T07:35:47.167Z] Copying: 186/1024 [MB] (18 MBps) [2024-11-20T07:35:48.540Z] Copying: 205/1024 [MB] (18 MBps) [2024-11-20T07:35:49.473Z] Copying: 223/1024 [MB] (18 MBps) [2024-11-20T07:35:50.407Z] Copying: 242/1024 [MB] (18 MBps) [2024-11-20T07:35:51.344Z] Copying: 260/1024 [MB] (18 MBps) [2024-11-20T07:35:52.280Z] Copying: 279/1024 [MB] (18 MBps) [2024-11-20T07:35:53.214Z] Copying: 298/1024 [MB] (18 MBps) [2024-11-20T07:35:54.158Z] Copying: 317/1024 [MB] (18 MBps) [2024-11-20T07:35:55.533Z] Copying: 336/1024 [MB] (19 MBps) [2024-11-20T07:35:56.514Z] Copying: 355/1024 [MB] (19 MBps) [2024-11-20T07:35:57.489Z] Copying: 373/1024 [MB] (17 MBps) [2024-11-20T07:35:58.426Z] Copying: 392/1024 [MB] (18 MBps) [2024-11-20T07:35:59.375Z] Copying: 412/1024 [MB] (19 MBps) [2024-11-20T07:36:00.318Z] Copying: 431/1024 [MB] (19 MBps) [2024-11-20T07:36:01.254Z] Copying: 449/1024 [MB] (18 MBps) [2024-11-20T07:36:02.189Z] Copying: 468/1024 [MB] (18 MBps) [2024-11-20T07:36:03.608Z] Copying: 487/1024 [MB] (19 MBps) [2024-11-20T07:36:04.175Z] Copying: 506/1024 [MB] (18 MBps) [2024-11-20T07:36:05.552Z] Copying: 525/1024 [MB] (19 MBps) [2024-11-20T07:36:06.488Z] Copying: 543/1024 [MB] (17 MBps) [2024-11-20T07:36:07.503Z] Copying: 560/1024 [MB] (17 MBps) [2024-11-20T07:36:08.441Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-20T07:36:09.463Z] Copying: 594/1024 [MB] (16 MBps) [2024-11-20T07:36:10.399Z] Copying: 612/1024 [MB] (17 MBps) [2024-11-20T07:36:11.334Z] Copying: 627/1024 [MB] (15 MBps) [2024-11-20T07:36:12.268Z] Copying: 645/1024 [MB] (17 MBps) [2024-11-20T07:36:13.204Z] Copying: 663/1024 [MB] (17 MBps) [2024-11-20T07:36:14.584Z] Copying: 679/1024 [MB] (16 MBps) [2024-11-20T07:36:15.150Z] Copying: 696/1024 [MB] (16 MBps) [2024-11-20T07:36:16.524Z] Copying: 713/1024 [MB] (16 MBps) [2024-11-20T07:36:17.535Z] Copying: 730/1024 [MB] (16 MBps) [2024-11-20T07:36:18.498Z] Copying: 746/1024 [MB] (16 MBps) [2024-11-20T07:36:19.533Z] Copying: 762/1024 [MB] (16 MBps) [2024-11-20T07:36:20.469Z] Copying: 780/1024 [MB] (17 MBps) [2024-11-20T07:36:21.406Z] Copying: 797/1024 [MB] (17 MBps) [2024-11-20T07:36:22.342Z] Copying: 814/1024 [MB] (17 MBps) [2024-11-20T07:36:23.276Z] Copying: 830/1024 [MB] (16 MBps) [2024-11-20T07:36:24.212Z] Copying: 847/1024 [MB] (16 MBps) [2024-11-20T07:36:25.148Z] Copying: 863/1024 [MB] (16 MBps) [2024-11-20T07:36:26.523Z] Copying: 880/1024 [MB] (16 MBps) [2024-11-20T07:36:27.460Z] Copying: 897/1024 [MB] (16 MBps) 
[2024-11-20T07:36:28.398Z] Copying: 914/1024 [MB] (16 MBps) [2024-11-20T07:36:29.334Z] Copying: 930/1024 [MB] (16 MBps) [2024-11-20T07:36:30.268Z] Copying: 947/1024 [MB] (16 MBps) [2024-11-20T07:36:31.203Z] Copying: 963/1024 [MB] (16 MBps) [2024-11-20T07:36:32.579Z] Copying: 980/1024 [MB] (16 MBps) [2024-11-20T07:36:33.516Z] Copying: 996/1024 [MB] (16 MBps) [2024-11-20T07:36:33.775Z] Copying: 1013/1024 [MB] (16 MBps) [2024-11-20T07:36:35.153Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:39:10.950 00:39:10.950 07:36:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:39:10.950 07:36:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:39:11.209 07:36:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:39:11.469 [2024-11-20 07:36:35.498636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.498732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:11.469 [2024-11-20 07:36:35.498754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:11.469 [2024-11-20 07:36:35.498772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.498809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:11.469 [2024-11-20 07:36:35.503660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.503722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:11.469 [2024-11-20 07:36:35.503762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.792 ms 00:39:11.469 [2024-11-20 07:36:35.503776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.506094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.506156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:11.469 [2024-11-20 07:36:35.506180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:39:11.469 [2024-11-20 07:36:35.506196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.524297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.524383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:11.469 [2024-11-20 07:36:35.524412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.058 ms 00:39:11.469 [2024-11-20 07:36:35.524426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.530498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.530565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:11.469 [2024-11-20 07:36:35.530589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.986 ms 00:39:11.469 [2024-11-20 07:36:35.530603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.576677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.576790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:11.469 [2024-11-20 07:36:35.576833] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.908 ms 00:39:11.469 [2024-11-20 07:36:35.576848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.603267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.603380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:11.469 [2024-11-20 07:36:35.603406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.292 ms 00:39:11.469 [2024-11-20 07:36:35.603425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.603713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.603732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:11.469 [2024-11-20 07:36:35.603750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:39:11.469 [2024-11-20 07:36:35.603765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.469 [2024-11-20 07:36:35.648247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.469 [2024-11-20 07:36:35.648345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:11.469 [2024-11-20 07:36:35.648369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.434 ms 00:39:11.469 [2024-11-20 07:36:35.648400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.730 [2024-11-20 07:36:35.691690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.730 [2024-11-20 07:36:35.691784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:11.730 [2024-11-20 07:36:35.691811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.152 ms 00:39:11.730 [2024-11-20 07:36:35.691836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.730 [2024-11-20 07:36:35.734121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.730 [2024-11-20 07:36:35.734234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:11.730 [2024-11-20 07:36:35.734261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.130 ms 00:39:11.730 [2024-11-20 07:36:35.734276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.730 [2024-11-20 07:36:35.776299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.730 [2024-11-20 07:36:35.776395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:11.730 [2024-11-20 07:36:35.776420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.765 ms 00:39:11.730 [2024-11-20 07:36:35.776450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.730 [2024-11-20 07:36:35.776565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:11.730 [2024-11-20 07:36:35.776589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.776986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:11.730 [2024-11-20 07:36:35.777455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777505] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 
07:36:35.777934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.777997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:11.731 [2024-11-20 07:36:35.778312] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:11.731 [2024-11-20 07:36:35.778329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a0b8524-eff8-4987-9ed1-fbd226a5ac54 00:39:11.731 [2024-11-20 07:36:35.778344] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:11.731 [2024-11-20 07:36:35.778363] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:11.731 [2024-11-20 07:36:35.778376] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:11.731 [2024-11-20 07:36:35.778398] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:11.731 [2024-11-20 07:36:35.778411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:11.731 [2024-11-20 07:36:35.778428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:11.731 [2024-11-20 07:36:35.778441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:11.731 [2024-11-20 07:36:35.778457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:11.731 [2024-11-20 07:36:35.778469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:11.731 [2024-11-20 07:36:35.778486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.731 [2024-11-20 07:36:35.778500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:11.731 [2024-11-20 07:36:35.778518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:39:11.731 [2024-11-20 07:36:35.778531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.731 [2024-11-20 07:36:35.801129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.731 [2024-11-20 07:36:35.801213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:11.731 [2024-11-20 07:36:35.801240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.460 ms 00:39:11.731 [2024-11-20 07:36:35.801273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.731 [2024-11-20 07:36:35.801919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:11.731 [2024-11-20 07:36:35.801948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:11.731 [2024-11-20 07:36:35.801969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:39:11.731 [2024-11-20 07:36:35.801983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.731 [2024-11-20 07:36:35.876781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.731 [2024-11-20 07:36:35.876898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:11.731 [2024-11-20 07:36:35.876921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.731 [2024-11-20 07:36:35.876952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.731 [2024-11-20 07:36:35.877054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.731 [2024-11-20 07:36:35.877069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:11.731 [2024-11-20 07:36:35.877114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.731 [2024-11-20 07:36:35.877127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.731 [2024-11-20 07:36:35.877339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.731 [2024-11-20 07:36:35.877358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:11.731 [2024-11-20 07:36:35.877380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.731 [2024-11-20 07:36:35.877393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:39:11.731 [2024-11-20 07:36:35.877426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.731 [2024-11-20 07:36:35.877441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:11.731 [2024-11-20 07:36:35.877458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.731 [2024-11-20 07:36:35.877482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.015582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.015694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:11.992 [2024-11-20 07:36:36.015719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.015733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.130963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:11.992 [2024-11-20 07:36:36.131072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.131087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.131255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:11.992 [2024-11-20 07:36:36.131291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.131309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.131396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:11.992 [2024-11-20 07:36:36.131430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.131444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.131584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:11.992 [2024-11-20 07:36:36.131620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.131633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.131694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:11.992 [2024-11-20 07:36:36.131728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 07:36:36.131741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:11.992 [2024-11-20 07:36:36.131793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:11.992 [2024-11-20 07:36:36.131808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:11.992 [2024-11-20 07:36:36.131843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:11.992 [2024-11-20 
07:36:36.131856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:11.992 [2024-11-20 07:36:36.131924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:11.992 [2024-11-20 07:36:36.131940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:39:11.992 [2024-11-20 07:36:36.131957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:11.992 [2024-11-20 07:36:36.131971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:11.992 [2024-11-20 07:36:36.132139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 633.463 ms, result 0
00:39:11.992 true
00:39:11.992 07:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78902
00:39:11.992 07:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78902
00:39:11.992 07:36:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:39:12.252 [2024-11-20 07:36:36.282597] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:39:12.252 [2024-11-20 07:36:36.282795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79746 ]
00:39:12.512 [2024-11-20 07:36:36.468353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:12.512 [2024-11-20 07:36:36.595477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:13.890  [2024-11-20T07:36:39.030Z] Copying: 168/1024 [MB] (168 MBps) [2024-11-20T07:36:39.966Z] Copying: 337/1024 [MB] (169 MBps) [2024-11-20T07:36:41.340Z] Copying: 512/1024 [MB] (175 MBps) [2024-11-20T07:36:42.274Z] Copying: 682/1024 [MB] (169 MBps) [2024-11-20T07:36:43.209Z] Copying: 850/1024 [MB] (167 MBps) [2024-11-20T07:36:43.209Z] Copying: 1020/1024 [MB] (169 MBps) [2024-11-20T07:36:44.195Z] Copying: 1024/1024 [MB] (average 170 MBps)
00:39:19.992
00:39:19.992 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78902 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:39:19.992 07:36:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:39:20.251 [2024-11-20 07:36:44.305688] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization...
00:39:20.251 [2024-11-20 07:36:44.305915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79832 ]
00:39:20.509 [2024-11-20 07:36:44.502084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:20.509 [2024-11-20 07:36:44.630860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:21.077 [2024-11-20 07:36:45.022232] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:39:21.077 [2024-11-20 07:36:45.022354] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:39:21.077 [2024-11-20 07:36:45.090010] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:39:21.077 [2024-11-20 07:36:45.090440] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:39:21.077 [2024-11-20 07:36:45.090778] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:39:21.337 [2024-11-20 07:36:45.349071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.337 [2024-11-20 07:36:45.349138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:39:21.337 [2024-11-20 07:36:45.349157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:39:21.337 [2024-11-20 07:36:45.349170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.337 [2024-11-20 07:36:45.349268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.337 [2024-11-20 07:36:45.349284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:39:21.337 [2024-11-20 07:36:45.349298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:39:21.337 [2024-11-20 07:36:45.349311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.337 [2024-11-20 07:36:45.349342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:39:21.337 [2024-11-20 07:36:45.350528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:39:21.337 [2024-11-20 07:36:45.350573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.337 [2024-11-20 07:36:45.350589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:39:21.337 [2024-11-20 07:36:45.350604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms
00:39:21.337 [2024-11-20 07:36:45.350618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.337 [2024-11-20 07:36:45.352272] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:39:21.337 [2024-11-20 07:36:45.373799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.337 [2024-11-20 07:36:45.373902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:39:21.337 [2024-11-20 07:36:45.373923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.524 ms
00:39:21.337 [2024-11-20 07:36:45.373938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.337 [2024-11-20 07:36:45.374069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.337 [2024-11-20 07:36:45.374088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super
block 00:39:21.337 [2024-11-20 07:36:45.374102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:39:21.337 [2024-11-20 07:36:45.374116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.382135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.337 [2024-11-20 07:36:45.382221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:21.337 [2024-11-20 07:36:45.382238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.882 ms 00:39:21.337 [2024-11-20 07:36:45.382253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.382363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.337 [2024-11-20 07:36:45.382383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:21.337 [2024-11-20 07:36:45.382398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:39:21.337 [2024-11-20 07:36:45.382411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.382484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.337 [2024-11-20 07:36:45.382504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:21.337 [2024-11-20 07:36:45.382519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:39:21.337 [2024-11-20 07:36:45.382532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.382567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:21.337 [2024-11-20 07:36:45.387801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.337 [2024-11-20 07:36:45.387874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:21.337 [2024-11-20 07:36:45.387909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:39:21.337 [2024-11-20 07:36:45.387922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.387973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.337 [2024-11-20 07:36:45.387988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:21.337 [2024-11-20 07:36:45.388003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:21.337 [2024-11-20 07:36:45.388017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.337 [2024-11-20 07:36:45.388121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:21.337 [2024-11-20 07:36:45.388156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:21.337 [2024-11-20 07:36:45.388196] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:21.337 [2024-11-20 07:36:45.388217] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:21.337 [2024-11-20 07:36:45.388337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:21.337 [2024-11-20 07:36:45.388354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:21.337 
[2024-11-20 07:36:45.388371] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:21.337 [2024-11-20 07:36:45.388388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:21.337 [2024-11-20 07:36:45.388407] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:21.337 [2024-11-20 07:36:45.388423] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:21.337 [2024-11-20 07:36:45.388436] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:21.337 [2024-11-20 07:36:45.388449] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:21.338 [2024-11-20 07:36:45.388462] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:21.338 [2024-11-20 07:36:45.388476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.388488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:21.338 [2024-11-20 07:36:45.388502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:39:21.338 [2024-11-20 07:36:45.388515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.338 [2024-11-20 07:36:45.388604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.388633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:21.338 [2024-11-20 07:36:45.388647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:21.338 [2024-11-20 07:36:45.388660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.338 [2024-11-20 07:36:45.388777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:21.338 [2024-11-20 07:36:45.388797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:21.338 [2024-11-20 07:36:45.388811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:21.338 [2024-11-20 07:36:45.388841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.388855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:21.338 [2024-11-20 07:36:45.388868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.388881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:21.338 [2024-11-20 07:36:45.388893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:21.338 [2024-11-20 07:36:45.388905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:21.338 [2024-11-20 07:36:45.388917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:21.338 [2024-11-20 07:36:45.388930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:21.338 [2024-11-20 07:36:45.388956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:21.338 [2024-11-20 07:36:45.388968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:21.338 [2024-11-20 07:36:45.388981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:21.338 [2024-11-20 07:36:45.388993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:21.338 [2024-11-20 07:36:45.389006] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:21.338 [2024-11-20 07:36:45.389031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:21.338 [2024-11-20 07:36:45.389068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:21.338 [2024-11-20 07:36:45.389105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:21.338 [2024-11-20 07:36:45.389140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:21.338 [2024-11-20 07:36:45.389177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:21.338 [2024-11-20 07:36:45.389213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:21.338 [2024-11-20 07:36:45.389237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:21.338 [2024-11-20 07:36:45.389249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:21.338 [2024-11-20 07:36:45.389261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:21.338 [2024-11-20 07:36:45.389274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:21.338 [2024-11-20 07:36:45.389286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:21.338 [2024-11-20 07:36:45.389297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:21.338 [2024-11-20 07:36:45.389322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:21.338 [2024-11-20 07:36:45.389334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 07:36:45.389346] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:21.338 [2024-11-20 07:36:45.389359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:21.338 [2024-11-20 07:36:45.389372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.338 [2024-11-20 
07:36:45.389404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:21.338 [2024-11-20 07:36:45.389417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:21.338 [2024-11-20 07:36:45.389430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:21.338 [2024-11-20 07:36:45.389443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:21.338 [2024-11-20 07:36:45.389456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:21.338 [2024-11-20 07:36:45.389468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:21.338 [2024-11-20 07:36:45.389483] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:21.338 [2024-11-20 07:36:45.389498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:21.338 [2024-11-20 07:36:45.389526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:21.338 [2024-11-20 07:36:45.389539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:21.338 [2024-11-20 07:36:45.389553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:21.338 [2024-11-20 07:36:45.389567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:21.338 [2024-11-20 07:36:45.389580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:21.338 [2024-11-20 07:36:45.389593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:21.338 [2024-11-20 07:36:45.389607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:39:21.338 [2024-11-20 07:36:45.389620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:21.338 [2024-11-20 07:36:45.389633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:21.338 [2024-11-20 07:36:45.389701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:39:21.338 [2024-11-20 07:36:45.389716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:21.338 [2024-11-20 07:36:45.389744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:21.338 [2024-11-20 07:36:45.389758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:21.338 [2024-11-20 07:36:45.389771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:21.338 [2024-11-20 07:36:45.389785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.389799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:21.338 [2024-11-20 07:36:45.389823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:39:21.338 [2024-11-20 07:36:45.389837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.338 [2024-11-20 07:36:45.432900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.432990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:21.338 [2024-11-20 07:36:45.433012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.995 ms 00:39:21.338 [2024-11-20 07:36:45.433026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.338 [2024-11-20 07:36:45.433151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.433172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:21.338 [2024-11-20 07:36:45.433185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:39:21.338 [2024-11-20 07:36:45.433198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.338 [2024-11-20 07:36:45.501072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.338 [2024-11-20 07:36:45.501138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:21.338 [2024-11-20 07:36:45.501158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.753 ms 00:39:21.338 [2024-11-20 07:36:45.501177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.339 [2024-11-20 07:36:45.501269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.339 [2024-11-20 07:36:45.501285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:21.339 [2024-11-20 07:36:45.501300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:21.339 [2024-11-20 07:36:45.501313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.339 [2024-11-20 07:36:45.501896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.339 [2024-11-20 07:36:45.501931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:21.339 [2024-11-20 07:36:45.501947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:39:21.339 [2024-11-20 07:36:45.501961] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.339 [2024-11-20 07:36:45.502140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.339 [2024-11-20 07:36:45.502160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:21.339 [2024-11-20 07:36:45.502176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:39:21.339 [2024-11-20 07:36:45.502190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.339 [2024-11-20 07:36:45.523867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.339 [2024-11-20 07:36:45.523958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:21.339 [2024-11-20 07:36:45.523978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.645 ms 00:39:21.339 [2024-11-20 07:36:45.523992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.545597] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:21.598 [2024-11-20 07:36:45.545700] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:21.598 [2024-11-20 07:36:45.545723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.545737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:21.598 [2024-11-20 07:36:45.545754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.541 ms 00:39:21.598 [2024-11-20 07:36:45.545768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.579005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.579117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:21.598 [2024-11-20 07:36:45.579167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.104 ms 00:39:21.598 [2024-11-20 07:36:45.579180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.600391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.600488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:21.598 [2024-11-20 07:36:45.600509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.092 ms 00:39:21.598 [2024-11-20 07:36:45.600521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.621804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.621930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:21.598 [2024-11-20 07:36:45.621950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.170 ms 00:39:21.598 [2024-11-20 07:36:45.622002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.623079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.623125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:21.598 [2024-11-20 07:36:45.623145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:39:21.598 [2024-11-20 07:36:45.623160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
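[Editor's note] The startup trace above follows a fixed pattern: each management step in mngt/ftl_mngt.c logs an Action marker, a name, a duration, and a status, and the layout dump it brackets reports sizes that can be cross-checked against each other; for example, the 20971520 L2P entries at a 4-byte address size account exactly for the 80.00 MiB l2p region. Below is a minimal bash sketch of both checks, assuming the console output has been saved one entry per line to a local file; the file path is illustrative and is not something the test itself produces.

    #!/usr/bin/env bash
    # Sketch only: cross-check the layout dump and rank startup steps by duration.
    # Assumes the console log above was saved to "$log", one entry per line.
    log=${1:-ftl_startup.log}   # hypothetical path, not created by the test

    # L2P region size: 20971520 entries * 4-byte addresses = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" in the dump above.
    echo "l2p region: $(( 20971520 * 4 / 1024 / 1024 )) MiB"

    # Each step logs "name: ..." then "duration: ... ms"; pairing the two
    # streams ranks steps by time spent (e.g. "Initialize NV cache" at 67.753 ms).
    paste \
      <(grep -oP 'trace_step: .* name: \K.*' "$log") \
      <(grep -oP 'trace_step: .* duration: \K[0-9.]+' "$log") |
      sort -t$'\t' -k2,2 -rn | head

The same pairing works for the shutdown trace later in the log, since the Action/name/duration/status quadruple is identical there.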
00:39:21.598 [2024-11-20 07:36:45.721978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.722080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:21.598 [2024-11-20 07:36:45.722118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.782 ms 00:39:21.598 [2024-11-20 07:36:45.722133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.736835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:21.598 [2024-11-20 07:36:45.740518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.740580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:21.598 [2024-11-20 07:36:45.740600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.274 ms 00:39:21.598 [2024-11-20 07:36:45.740614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.740788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.740804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:21.598 [2024-11-20 07:36:45.740828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:21.598 [2024-11-20 07:36:45.740841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.740950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.740967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:21.598 [2024-11-20 07:36:45.740980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:39:21.598 [2024-11-20 07:36:45.740995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.741026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.741046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:21.598 [2024-11-20 07:36:45.741060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:21.598 [2024-11-20 07:36:45.741073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.741116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:21.598 [2024-11-20 07:36:45.741136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.741149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:21.598 [2024-11-20 07:36:45.741163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:39:21.598 [2024-11-20 07:36:45.741176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.783860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 07:36:45.783949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:21.598 [2024-11-20 07:36:45.783972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.648 ms 00:39:21.598 [2024-11-20 07:36:45.783988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.784133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.598 [2024-11-20 
07:36:45.784152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:21.598 [2024-11-20 07:36:45.784167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:39:21.598 [2024-11-20 07:36:45.784181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.598 [2024-11-20 07:36:45.785568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.948 ms, result 0 00:39:22.976  [2024-11-20T07:36:48.117Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T07:36:49.055Z] Copying: 59/1024 [MB] (29 MBps) [2024-11-20T07:36:49.991Z] Copying: 89/1024 [MB] (30 MBps) [2024-11-20T07:36:50.927Z] Copying: 119/1024 [MB] (30 MBps) [2024-11-20T07:36:51.894Z] Copying: 149/1024 [MB] (29 MBps) [2024-11-20T07:36:52.832Z] Copying: 180/1024 [MB] (31 MBps) [2024-11-20T07:36:54.209Z] Copying: 214/1024 [MB] (33 MBps) [2024-11-20T07:36:55.146Z] Copying: 247/1024 [MB] (33 MBps) [2024-11-20T07:36:56.082Z] Copying: 279/1024 [MB] (31 MBps) [2024-11-20T07:36:57.018Z] Copying: 313/1024 [MB] (34 MBps) [2024-11-20T07:36:57.988Z] Copying: 347/1024 [MB] (33 MBps) [2024-11-20T07:36:58.948Z] Copying: 381/1024 [MB] (34 MBps) [2024-11-20T07:36:59.884Z] Copying: 416/1024 [MB] (34 MBps) [2024-11-20T07:37:00.820Z] Copying: 450/1024 [MB] (34 MBps) [2024-11-20T07:37:02.196Z] Copying: 485/1024 [MB] (35 MBps) [2024-11-20T07:37:03.132Z] Copying: 520/1024 [MB] (34 MBps) [2024-11-20T07:37:04.066Z] Copying: 554/1024 [MB] (34 MBps) [2024-11-20T07:37:05.002Z] Copying: 589/1024 [MB] (34 MBps) [2024-11-20T07:37:05.935Z] Copying: 622/1024 [MB] (33 MBps) [2024-11-20T07:37:06.871Z] Copying: 656/1024 [MB] (33 MBps) [2024-11-20T07:37:07.806Z] Copying: 684/1024 [MB] (28 MBps) [2024-11-20T07:37:09.183Z] Copying: 714/1024 [MB] (29 MBps) [2024-11-20T07:37:10.119Z] Copying: 744/1024 [MB] (29 MBps) [2024-11-20T07:37:11.059Z] Copying: 774/1024 [MB] (29 MBps) [2024-11-20T07:37:11.995Z] Copying: 802/1024 [MB] (28 MBps) [2024-11-20T07:37:12.931Z] Copying: 832/1024 [MB] (30 MBps) [2024-11-20T07:37:13.867Z] Copying: 861/1024 [MB] (29 MBps) [2024-11-20T07:37:14.932Z] Copying: 890/1024 [MB] (29 MBps) [2024-11-20T07:37:15.867Z] Copying: 919/1024 [MB] (29 MBps) [2024-11-20T07:37:16.799Z] Copying: 949/1024 [MB] (29 MBps) [2024-11-20T07:37:18.172Z] Copying: 978/1024 [MB] (29 MBps) [2024-11-20T07:37:19.107Z] Copying: 1009/1024 [MB] (30 MBps) [2024-11-20T07:37:19.672Z] Copying: 1023/1024 [MB] (13 MBps) [2024-11-20T07:37:19.672Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 07:37:19.370840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.371174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:55.469 [2024-11-20 07:37:19.371212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:55.469 [2024-11-20 07:37:19.371227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.373401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:55.469 [2024-11-20 07:37:19.381151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.381228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:55.469 [2024-11-20 07:37:19.381248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:39:55.469 [2024-11-20 07:37:19.381263] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.392751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.392855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:55.469 [2024-11-20 07:37:19.392876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:39:55.469 [2024-11-20 07:37:19.392890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.415406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.415509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:55.469 [2024-11-20 07:37:19.415532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.484 ms 00:39:55.469 [2024-11-20 07:37:19.415548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.421501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.421603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:55.469 [2024-11-20 07:37:19.421622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:39:55.469 [2024-11-20 07:37:19.421637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.466246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.466349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:55.469 [2024-11-20 07:37:19.466371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.525 ms 00:39:55.469 [2024-11-20 07:37:19.466384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.491882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.491982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:55.469 [2024-11-20 07:37:19.492003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.400 ms 00:39:55.469 [2024-11-20 07:37:19.492017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.586330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.586439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:55.469 [2024-11-20 07:37:19.586463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.200 ms 00:39:55.469 [2024-11-20 07:37:19.586513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.469 [2024-11-20 07:37:19.633936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.469 [2024-11-20 07:37:19.634068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:55.469 [2024-11-20 07:37:19.634107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.390 ms 00:39:55.469 [2024-11-20 07:37:19.634124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.728 [2024-11-20 07:37:19.679592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.728 [2024-11-20 07:37:19.679685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:55.728 [2024-11-20 07:37:19.679706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 45.368 ms 00:39:55.728 [2024-11-20 07:37:19.679719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.728 [2024-11-20 07:37:19.723942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.728 [2024-11-20 07:37:19.724035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:55.728 [2024-11-20 07:37:19.724055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.125 ms 00:39:55.728 [2024-11-20 07:37:19.724068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.728 [2024-11-20 07:37:19.768108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.728 [2024-11-20 07:37:19.768201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:55.728 [2024-11-20 07:37:19.768223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.864 ms 00:39:55.728 [2024-11-20 07:37:19.768236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.728 [2024-11-20 07:37:19.768330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:55.728 [2024-11-20 07:37:19.768354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127232 / 261120 wr_cnt: 1 state: open 00:39:55.728 [2024-11-20 07:37:19.768370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.768990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769533] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:55.728 [2024-11-20 07:37:19.769616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.769999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770034] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:55.729 [2024-11-20 07:37:19.770239] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:55.729 [2024-11-20 07:37:19.770254] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a0b8524-eff8-4987-9ed1-fbd226a5ac54 00:39:55.729 [2024-11-20 07:37:19.770270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127232 00:39:55.729 [2024-11-20 07:37:19.770309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128192 00:39:55.729 [2024-11-20 07:37:19.770348] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127232 00:39:55.729 [2024-11-20 07:37:19.770364] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:39:55.729 [2024-11-20 07:37:19.770377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:55.729 [2024-11-20 07:37:19.770391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:55.729 [2024-11-20 07:37:19.770405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:55.729 [2024-11-20 07:37:19.770418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:55.729 [2024-11-20 07:37:19.770433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:55.729 [2024-11-20 07:37:19.770460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.729 [2024-11-20 07:37:19.770484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:55.729 [2024-11-20 07:37:19.770501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:39:55.729 [2024-11-20 07:37:19.770516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.793609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.729 [2024-11-20 07:37:19.793698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:55.729 [2024-11-20 07:37:19.793719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.956 ms 00:39:55.729 [2024-11-20 07:37:19.793733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.794384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:55.729 [2024-11-20 07:37:19.794415] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:55.729 [2024-11-20 07:37:19.794432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:39:55.729 [2024-11-20 07:37:19.794448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.853136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.729 [2024-11-20 07:37:19.853221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:55.729 [2024-11-20 07:37:19.853240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.729 [2024-11-20 07:37:19.853271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.853372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.729 [2024-11-20 07:37:19.853388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:55.729 [2024-11-20 07:37:19.853402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.729 [2024-11-20 07:37:19.853415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.853560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.729 [2024-11-20 07:37:19.853583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:55.729 [2024-11-20 07:37:19.853598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.729 [2024-11-20 07:37:19.853616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.729 [2024-11-20 07:37:19.853652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.729 [2024-11-20 07:37:19.853667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:55.729 [2024-11-20 07:37:19.853680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.729 [2024-11-20 07:37:19.853694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:19.988685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:19.988778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:55.987 [2024-11-20 07:37:19.988798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:19.988811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.100369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:20.100474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:55.987 [2024-11-20 07:37:20.100494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:20.100507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.100634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:20.100649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:55.987 [2024-11-20 07:37:20.100661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:20.100674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.100772] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:20.100795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:55.987 [2024-11-20 07:37:20.100817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:20.100867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.101047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:20.101084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:55.987 [2024-11-20 07:37:20.101100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:20.101115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.101183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.987 [2024-11-20 07:37:20.101200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:55.987 [2024-11-20 07:37:20.101217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.987 [2024-11-20 07:37:20.101238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.987 [2024-11-20 07:37:20.101290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.988 [2024-11-20 07:37:20.101318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:55.988 [2024-11-20 07:37:20.101332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.988 [2024-11-20 07:37:20.101345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.988 [2024-11-20 07:37:20.101397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.988 [2024-11-20 07:37:20.101413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:55.988 [2024-11-20 07:37:20.101435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.988 [2024-11-20 07:37:20.101457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.988 [2024-11-20 07:37:20.101612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 731.682 ms, result 0 00:39:57.889 00:39:57.889 00:39:57.889 07:37:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:39:59.845 07:37:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:00.103 [2024-11-20 07:37:24.123946] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
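[Editor's note] With the first spdk_dd run complete, the shutdown statistics above are internally consistent: the single open band (Band 1: 127232 / 261120) holds exactly the 127232 valid LBAs reported, and the logged WAF is simply total writes over user writes, 128192 / 127232 ≈ 1.0075. The read-back invocation above requests 262144 blocks; if the logical block is 4 KiB (an inference from the 1024/1024 [MB] progress totals earlier, assuming the first transfer used the same 262144-block count, since the log never states the block size), that is exactly 1024 MiB, and at the timestamps shown the copy took roughly 34 s, matching the reported 30 MBps average. A quick bash/awk check of those figures, as a sketch rather than part of the test:

    #!/usr/bin/env bash
    # Sketch only: sanity-check figures from the shutdown stats and the dd size.

    # WAF = total writes / user writes, as dumped by ftl_debug.c above:
    awk 'BEGIN { printf "WAF: %.4f\n", 128192 / 127232 }'        # -> 1.0075

    # Metadata overhead behind that WAF: 960 blocks beyond user data.
    echo "extra writes: $(( 128192 - 127232 )) blocks"

    # Transfer size for --count=262144, assuming a 4 KiB logical block
    # (consistent with the 1024/1024 [MB] progress totals, not stated in the log):
    echo "transfer: $(( 262144 * 4096 / 1024 / 1024 )) MiB"      # -> 1024 MiB

    # Average throughput over the ~34 s the copy phase spans in the timestamps:
    echo "avg: $(( 1024 / 34 )) MBps"                            # -> 30 MBps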
00:40:00.103 [2024-11-20 07:37:24.124244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80221 ] 00:40:00.362 [2024-11-20 07:37:24.325368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.362 [2024-11-20 07:37:24.502024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:00.930 [2024-11-20 07:37:24.888412] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:00.930 [2024-11-20 07:37:24.888489] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:00.930 [2024-11-20 07:37:25.055260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.055330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:00.930 [2024-11-20 07:37:25.055358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:00.930 [2024-11-20 07:37:25.055371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.055447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.055463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:00.930 [2024-11-20 07:37:25.055482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:40:00.930 [2024-11-20 07:37:25.055496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.055525] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:00.930 [2024-11-20 07:37:25.056735] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:00.930 [2024-11-20 07:37:25.056774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.056788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:00.930 [2024-11-20 07:37:25.056803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:40:00.930 [2024-11-20 07:37:25.056830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.058479] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:00.930 [2024-11-20 07:37:25.081565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.081640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:00.930 [2024-11-20 07:37:25.081662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.083 ms 00:40:00.930 [2024-11-20 07:37:25.081676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.081829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.081848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:00.930 [2024-11-20 07:37:25.081862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:40:00.930 [2024-11-20 07:37:25.081876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.089874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:00.930 [2024-11-20 07:37:25.089931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:00.930 [2024-11-20 07:37:25.089948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.852 ms 00:40:00.930 [2024-11-20 07:37:25.089961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.090108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.090129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:00.930 [2024-11-20 07:37:25.090143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:40:00.930 [2024-11-20 07:37:25.090157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.090234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.090249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:00.930 [2024-11-20 07:37:25.090263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:00.930 [2024-11-20 07:37:25.090277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.090316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:00.930 [2024-11-20 07:37:25.095657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.095706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:00.930 [2024-11-20 07:37:25.095722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.351 ms 00:40:00.930 [2024-11-20 07:37:25.095740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.930 [2024-11-20 07:37:25.095783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.930 [2024-11-20 07:37:25.095797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:00.930 [2024-11-20 07:37:25.095810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:00.931 [2024-11-20 07:37:25.095834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.931 [2024-11-20 07:37:25.095958] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:00.931 [2024-11-20 07:37:25.095994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:00.931 [2024-11-20 07:37:25.096039] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:00.931 [2024-11-20 07:37:25.096065] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:00.931 [2024-11-20 07:37:25.096182] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:00.931 [2024-11-20 07:37:25.096207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:00.931 [2024-11-20 07:37:25.096224] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:00.931 [2024-11-20 07:37:25.096242] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096257] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096272] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:00.931 [2024-11-20 07:37:25.096286] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:00.931 [2024-11-20 07:37:25.096309] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:00.931 [2024-11-20 07:37:25.096326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:00.931 [2024-11-20 07:37:25.096346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.931 [2024-11-20 07:37:25.096360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:00.931 [2024-11-20 07:37:25.096374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:40:00.931 [2024-11-20 07:37:25.096386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.931 [2024-11-20 07:37:25.096480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.931 [2024-11-20 07:37:25.096495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:00.931 [2024-11-20 07:37:25.096509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:40:00.931 [2024-11-20 07:37:25.096522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:00.931 [2024-11-20 07:37:25.096640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:00.931 [2024-11-20 07:37:25.096665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:00.931 [2024-11-20 07:37:25.096680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:00.931 [2024-11-20 07:37:25.096720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:00.931 [2024-11-20 07:37:25.096757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:00.931 [2024-11-20 07:37:25.096783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:00.931 [2024-11-20 07:37:25.096797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:00.931 [2024-11-20 07:37:25.096810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:00.931 [2024-11-20 07:37:25.096839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:00.931 [2024-11-20 07:37:25.096852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:00.931 [2024-11-20 07:37:25.096879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:00.931 [2024-11-20 07:37:25.096904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096916] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:00.931 [2024-11-20 07:37:25.096943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:00.931 [2024-11-20 07:37:25.096969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:00.931 [2024-11-20 07:37:25.096981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:00.931 [2024-11-20 07:37:25.096993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:00.931 [2024-11-20 07:37:25.097005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:00.931 [2024-11-20 07:37:25.097017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:00.931 [2024-11-20 07:37:25.097042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:00.931 [2024-11-20 07:37:25.097053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:00.931 [2024-11-20 07:37:25.097079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:00.931 [2024-11-20 07:37:25.097091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:00.931 [2024-11-20 07:37:25.097115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:00.931 [2024-11-20 07:37:25.097127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:00.931 [2024-11-20 07:37:25.097138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:00.931 [2024-11-20 07:37:25.097151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:00.931 [2024-11-20 07:37:25.097164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:00.931 [2024-11-20 07:37:25.097176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:00.931 [2024-11-20 07:37:25.097200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:00.931 [2024-11-20 07:37:25.097212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097228] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:00.931 [2024-11-20 07:37:25.097242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:00.931 [2024-11-20 07:37:25.097255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:00.931 [2024-11-20 07:37:25.097268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:00.931 [2024-11-20 07:37:25.097281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:00.931 [2024-11-20 07:37:25.097294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:00.931 [2024-11-20 07:37:25.097307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:00.931 
[2024-11-20 07:37:25.097320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:00.931 [2024-11-20 07:37:25.097332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:00.931 [2024-11-20 07:37:25.097344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:00.931 [2024-11-20 07:37:25.097358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:00.931 [2024-11-20 07:37:25.097376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:00.931 [2024-11-20 07:37:25.097392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:00.931 [2024-11-20 07:37:25.097405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:00.931 [2024-11-20 07:37:25.097418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:00.931 [2024-11-20 07:37:25.097432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:00.931 [2024-11-20 07:37:25.097446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:00.931 [2024-11-20 07:37:25.097459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:00.931 [2024-11-20 07:37:25.097473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:00.931 [2024-11-20 07:37:25.097486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:00.932 [2024-11-20 07:37:25.097499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:00.932 [2024-11-20 07:37:25.097513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:00.932 [2024-11-20 07:37:25.097579] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:00.932 [2024-11-20 07:37:25.097601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:00.932 [2024-11-20 07:37:25.097629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:00.932 [2024-11-20 07:37:25.097642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:00.932 [2024-11-20 07:37:25.097656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:00.932 [2024-11-20 07:37:25.097672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:00.932 [2024-11-20 07:37:25.097687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:00.932 [2024-11-20 07:37:25.097700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms 00:40:00.932 [2024-11-20 07:37:25.097713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.139925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.139994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:01.191 [2024-11-20 07:37:25.140013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.148 ms 00:40:01.191 [2024-11-20 07:37:25.140031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.140143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.140157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:01.191 [2024-11-20 07:37:25.140170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:40:01.191 [2024-11-20 07:37:25.140182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.202180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.202249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:01.191 [2024-11-20 07:37:25.202268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.892 ms 00:40:01.191 [2024-11-20 07:37:25.202283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.202360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.202375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:01.191 [2024-11-20 07:37:25.202395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:01.191 [2024-11-20 07:37:25.202408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.203055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.203087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:01.191 [2024-11-20 07:37:25.203104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:40:01.191 [2024-11-20 07:37:25.203130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.203278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.203296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:01.191 [2024-11-20 07:37:25.203318] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:40:01.191 [2024-11-20 07:37:25.203332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.223550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.223622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:01.191 [2024-11-20 07:37:25.223642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.186 ms 00:40:01.191 [2024-11-20 07:37:25.223655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.244858] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:40:01.191 [2024-11-20 07:37:25.244937] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:01.191 [2024-11-20 07:37:25.244958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.244972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:01.191 [2024-11-20 07:37:25.244989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.100 ms 00:40:01.191 [2024-11-20 07:37:25.245001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.277688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.277787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:01.191 [2024-11-20 07:37:25.277808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.573 ms 00:40:01.191 [2024-11-20 07:37:25.277836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.299089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.299203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:01.191 [2024-11-20 07:37:25.299223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.142 ms 00:40:01.191 [2024-11-20 07:37:25.299254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.320095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.320183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:01.191 [2024-11-20 07:37:25.320202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.737 ms 00:40:01.191 [2024-11-20 07:37:25.320215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.191 [2024-11-20 07:37:25.321141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.191 [2024-11-20 07:37:25.321173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:01.191 [2024-11-20 07:37:25.321194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:40:01.191 [2024-11-20 07:37:25.321206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.418985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.419104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:01.450 [2024-11-20 07:37:25.419134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.742 ms 00:40:01.450 [2024-11-20 07:37:25.419148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.436151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:01.450 [2024-11-20 07:37:25.440014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.440074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:01.450 [2024-11-20 07:37:25.440096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.760 ms 00:40:01.450 [2024-11-20 07:37:25.440111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.440311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.440345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:01.450 [2024-11-20 07:37:25.440365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:40:01.450 [2024-11-20 07:37:25.440386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.442344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.442402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:01.450 [2024-11-20 07:37:25.442420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:40:01.450 [2024-11-20 07:37:25.442433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.442490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.442507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:01.450 [2024-11-20 07:37:25.442523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:01.450 [2024-11-20 07:37:25.442536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.442596] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:01.450 [2024-11-20 07:37:25.442613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.442627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:01.450 [2024-11-20 07:37:25.442641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:40:01.450 [2024-11-20 07:37:25.442655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.485928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.486037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:01.450 [2024-11-20 07:37:25.486067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.236 ms 00:40:01.450 [2024-11-20 07:37:25.486098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.450 [2024-11-20 07:37:25.486238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.450 [2024-11-20 07:37:25.486255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:01.450 [2024-11-20 07:37:25.486269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:40:01.450 [2024-11-20 07:37:25.486282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
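An aside on the startup trace above, not part of the captured run: the geometry it prints is internally consistent, and that is easy to check. The l2p region in the NV cache layout (80.00 MiB) is exactly the advertised 20971520 L2P entries times the 4-byte L2P address size, and the "SB metadata layout - nvc" region table tiles the cache bdev with no gaps: each blk_offs equals the previous blk_offs plus blk_sz, and the trailing type:0xfffffffe free region ends at 0x143300 blocks, which at the 4 KiB FTL block size implied by the dump is the advertised 5171.00 MiB NV cache capacity. A minimal Python sketch of those checks, with the region tuples transcribed from the dump:

    # Consistency checks against the layout dump above (a sketch, not part
    # of the test; assumes the 4 KiB FTL block size implied by the dump).
    NVC_REGIONS = [  # (type, blk_offs, blk_sz) from "SB metadata layout - nvc"
        (0x0, 0x0, 0x20), (0x2, 0x20, 0x5000), (0x3, 0x5020, 0x80),
        (0x4, 0x50a0, 0x80), (0xa, 0x5120, 0x800), (0xb, 0x5920, 0x800),
        (0xc, 0x6120, 0x800), (0xd, 0x6920, 0x800), (0xe, 0x7120, 0x40),
        (0xf, 0x7160, 0x40), (0x10, 0x71a0, 0x20), (0x11, 0x71c0, 0x20),
        (0x6, 0x71e0, 0x20), (0x7, 0x7200, 0x20),
        (0xfffffffe, 0x7220, 0x13c0e0),  # trailing free area
    ]
    FTL_BLOCK = 4096  # bytes per FTL block

    end = 0
    for rtype, offs, size in NVC_REGIONS:
        assert offs == end, f"gap or overlap before region type {rtype:#x}"
        end = offs + size

    # Regions tile the whole cache bdev: "NV cache device capacity: 5171.00 MiB"
    assert end * FTL_BLOCK == 5171 * 1024**2
    # "L2P entries: 20971520" x "L2P address size: 4" == the 80.00 MiB l2p region
    assert 20971520 * 4 == 80 * 1024**2
    print("layout dump is self-consistent")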
00:40:01.450 [2024-11-20 07:37:25.488356] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 432.007 ms, result 0 00:40:02.827  [2024-11-20T07:37:27.965Z] Copying: 952/1048576 [kB] (952 kBps) [2024-11-20T07:37:28.900Z] Copying: 5512/1048576 [kB] (4560 kBps) [2024-11-20T07:37:29.834Z] Copying: 35/1024 [MB] (30 MBps) [2024-11-20T07:37:30.772Z] Copying: 69/1024 [MB] (33 MBps) [2024-11-20T07:37:31.732Z] Copying: 103/1024 [MB] (33 MBps) [2024-11-20T07:37:33.109Z] Copying: 137/1024 [MB] (34 MBps) [2024-11-20T07:37:34.044Z] Copying: 171/1024 [MB] (33 MBps) [2024-11-20T07:37:34.980Z] Copying: 204/1024 [MB] (32 MBps) [2024-11-20T07:37:35.915Z] Copying: 241/1024 [MB] (37 MBps) [2024-11-20T07:37:36.852Z] Copying: 278/1024 [MB] (36 MBps) [2024-11-20T07:37:37.788Z] Copying: 314/1024 [MB] (35 MBps) [2024-11-20T07:37:38.733Z] Copying: 346/1024 [MB] (32 MBps) [2024-11-20T07:37:40.109Z] Copying: 383/1024 [MB] (36 MBps) [2024-11-20T07:37:41.045Z] Copying: 418/1024 [MB] (35 MBps) [2024-11-20T07:37:41.981Z] Copying: 453/1024 [MB] (34 MBps) [2024-11-20T07:37:42.916Z] Copying: 486/1024 [MB] (32 MBps) [2024-11-20T07:37:43.849Z] Copying: 522/1024 [MB] (35 MBps) [2024-11-20T07:37:44.782Z] Copying: 558/1024 [MB] (36 MBps) [2024-11-20T07:37:46.154Z] Copying: 594/1024 [MB] (35 MBps) [2024-11-20T07:37:47.088Z] Copying: 631/1024 [MB] (36 MBps) [2024-11-20T07:37:48.023Z] Copying: 667/1024 [MB] (36 MBps) [2024-11-20T07:37:48.957Z] Copying: 704/1024 [MB] (37 MBps) [2024-11-20T07:37:49.890Z] Copying: 742/1024 [MB] (37 MBps) [2024-11-20T07:37:50.823Z] Copying: 779/1024 [MB] (37 MBps) [2024-11-20T07:37:51.754Z] Copying: 817/1024 [MB] (37 MBps) [2024-11-20T07:37:53.124Z] Copying: 855/1024 [MB] (38 MBps) [2024-11-20T07:37:54.057Z] Copying: 893/1024 [MB] (38 MBps) [2024-11-20T07:37:54.996Z] Copying: 933/1024 [MB] (39 MBps) [2024-11-20T07:37:55.931Z] Copying: 972/1024 [MB] (39 MBps) [2024-11-20T07:37:56.190Z] Copying: 1010/1024 [MB] (37 MBps) [2024-11-20T07:37:57.126Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 07:37:56.939740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:56.940138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:32.923 [2024-11-20 07:37:56.940303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:32.923 [2024-11-20 07:37:56.940333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:56.940385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:32.923 [2024-11-20 07:37:56.948074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:56.948347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:32.923 [2024-11-20 07:37:56.948495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.651 ms 00:40:32.923 [2024-11-20 07:37:56.948530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:56.949023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:56.949069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:32.923 [2024-11-20 07:37:56.949096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:40:32.923 [2024-11-20 07:37:56.949119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:40:32.923 [2024-11-20 07:37:56.964973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:56.965071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:32.923 [2024-11-20 07:37:56.965096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.811 ms 00:40:32.923 [2024-11-20 07:37:56.965113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:56.973721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:56.973795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:32.923 [2024-11-20 07:37:56.973855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.553 ms 00:40:32.923 [2024-11-20 07:37:56.973872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:57.035703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:57.035787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:32.923 [2024-11-20 07:37:57.035810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.746 ms 00:40:32.923 [2024-11-20 07:37:57.035841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:57.075357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:57.075445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:32.923 [2024-11-20 07:37:57.075470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.414 ms 00:40:32.923 [2024-11-20 07:37:57.075487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:32.923 [2024-11-20 07:37:57.077546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:32.923 [2024-11-20 07:37:57.077607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:32.923 [2024-11-20 07:37:57.077628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:40:32.923 [2024-11-20 07:37:57.077644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.182 [2024-11-20 07:37:57.139373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.183 [2024-11-20 07:37:57.139458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:33.183 [2024-11-20 07:37:57.139482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.681 ms 00:40:33.183 [2024-11-20 07:37:57.139499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.183 [2024-11-20 07:37:57.200665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.183 [2024-11-20 07:37:57.200754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:33.183 [2024-11-20 07:37:57.200805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.067 ms 00:40:33.183 [2024-11-20 07:37:57.200837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.183 [2024-11-20 07:37:57.246779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.183 [2024-11-20 07:37:57.246861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:33.183 [2024-11-20 07:37:57.246895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.840 ms 00:40:33.183 [2024-11-20 07:37:57.246908] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.183 [2024-11-20 07:37:57.291661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.183 [2024-11-20 07:37:57.291747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:33.183 [2024-11-20 07:37:57.291765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.599 ms 00:40:33.183 [2024-11-20 07:37:57.291777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.183 [2024-11-20 07:37:57.291869] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:33.183 [2024-11-20 07:37:57.291913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:40:33.183 [2024-11-20 07:37:57.291934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:40:33.183 [2024-11-20 07:37:57.291947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.291960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.291972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.291984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.291996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292161] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 
07:37:57.292486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:33.183 [2024-11-20 07:37:57.292671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:40:33.184 [2024-11-20 07:37:57.292794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.292988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:33.184 [2024-11-20 07:37:57.293196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:33.184 [2024-11-20 07:37:57.293208] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a0b8524-eff8-4987-9ed1-fbd226a5ac54 00:40:33.184 [2024-11-20 07:37:57.293221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:40:33.184 [2024-11-20 07:37:57.293232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 137408 00:40:33.184 [2024-11-20 07:37:57.293248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 135424 00:40:33.184 [2024-11-20 07:37:57.293261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:40:33.184 [2024-11-20 07:37:57.293273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:33.184 [2024-11-20 07:37:57.293284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:33.184 [2024-11-20 07:37:57.293296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:33.184 [2024-11-20 07:37:57.293320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:33.184 [2024-11-20 07:37:57.293331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:33.184 [2024-11-20 07:37:57.293343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.184 [2024-11-20 07:37:57.293355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:33.184 [2024-11-20 07:37:57.293367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:40:33.184 [2024-11-20 07:37:57.293379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.317725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.184 [2024-11-20 07:37:57.317796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:33.184 [2024-11-20 07:37:57.317837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.258 ms 00:40:33.184 [2024-11-20 07:37:57.317850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.318521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:33.184 [2024-11-20 07:37:57.318544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:33.184 [2024-11-20 07:37:57.318557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:40:33.184 [2024-11-20 07:37:57.318569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.380700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.184 [2024-11-20 07:37:57.380770] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:33.184 [2024-11-20 07:37:57.380787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.184 [2024-11-20 07:37:57.380800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.380930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.184 [2024-11-20 07:37:57.380947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:33.184 [2024-11-20 07:37:57.380960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.184 [2024-11-20 07:37:57.380973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.381083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.184 [2024-11-20 07:37:57.381099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:33.184 [2024-11-20 07:37:57.381112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.184 [2024-11-20 07:37:57.381124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.184 [2024-11-20 07:37:57.381143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.184 [2024-11-20 07:37:57.381155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:33.184 [2024-11-20 07:37:57.381167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.184 [2024-11-20 07:37:57.381179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.443 [2024-11-20 07:37:57.529718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.443 [2024-11-20 07:37:57.529793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:33.443 [2024-11-20 07:37:57.529810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.443 [2024-11-20 07:37:57.529831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.651298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:33.702 [2024-11-20 07:37:57.651316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.651451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:33.702 [2024-11-20 07:37:57.651463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.651538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:33.702 [2024-11-20 07:37:57.651550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:40:33.702 [2024-11-20 07:37:57.651708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:33.702 [2024-11-20 07:37:57.651726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.651850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:33.702 [2024-11-20 07:37:57.651863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.651927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.651943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:33.702 [2024-11-20 07:37:57.651960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.651972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.652021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:33.702 [2024-11-20 07:37:57.652035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:33.702 [2024-11-20 07:37:57.652047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:33.702 [2024-11-20 07:37:57.652059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:33.702 [2024-11-20 07:37:57.652214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 712.419 ms, result 0 00:40:34.638 00:40:34.638 00:40:34.897 07:37:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:40:36.797 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:40:36.797 07:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:36.797 [2024-11-20 07:38:00.923140] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
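Before the second spdk_dd pass spins up, two quick arithmetic checks on the shutdown dump above: the reported total valid LBAs (262656) is simply the sum of the per-band valid counts (261120 in the closed Band 1 plus 1536 in the open Band 2), and the reported WAF (1.0147) is total writes divided by user writes (137408 / 135424). The second pass reads the other half of the test span, --skip=262144 --count=262144, which at the 4 KiB FTL block size is the same 1024 MiB span the first copy's progress meter counted up to, so its md5 can be compared the same way. A small sketch of the two checks (not part of the test; values transcribed from the ftl_debug.c dump):

    # Re-derive the shutdown statistics printed above.
    band_valid = {1: 261120, 2: 1536}  # every other band reports 0 / 261120
    assert sum(band_valid.values()) == 262656  # "total valid LBAs: 262656"

    total_writes, user_writes = 137408, 135424
    waf = total_writes / user_writes  # write amplification factor
    assert round(waf, 4) == 1.0147    # matches the logged "WAF: 1.0147"
    print(f"WAF = {waf:.4f}")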
00:40:36.797 [2024-11-20 07:38:00.923334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80590 ] 00:40:37.055 [2024-11-20 07:38:01.115688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:37.313 [2024-11-20 07:38:01.286871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.571 [2024-11-20 07:38:01.668311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:37.571 [2024-11-20 07:38:01.668389] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:37.830 [2024-11-20 07:38:01.832776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.832854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:37.830 [2024-11-20 07:38:01.832877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:37.830 [2024-11-20 07:38:01.832888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.832953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.832967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:37.830 [2024-11-20 07:38:01.832982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:40:37.830 [2024-11-20 07:38:01.832992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.833015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:37.830 [2024-11-20 07:38:01.834239] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:37.830 [2024-11-20 07:38:01.834273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.834286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:37.830 [2024-11-20 07:38:01.834298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:40:37.830 [2024-11-20 07:38:01.834309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.835841] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:37.830 [2024-11-20 07:38:01.857472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.857547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:37.830 [2024-11-20 07:38:01.857565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.626 ms 00:40:37.830 [2024-11-20 07:38:01.857577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.857716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.857731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:37.830 [2024-11-20 07:38:01.857743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:40:37.830 [2024-11-20 07:38:01.857755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.865702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:40:37.830 [2024-11-20 07:38:01.865759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:37.830 [2024-11-20 07:38:01.865775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.800 ms 00:40:37.830 [2024-11-20 07:38:01.865786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.865932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.865951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:37.830 [2024-11-20 07:38:01.865964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:40:37.830 [2024-11-20 07:38:01.865977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.866046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.866059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:37.830 [2024-11-20 07:38:01.866072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:37.830 [2024-11-20 07:38:01.866083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.830 [2024-11-20 07:38:01.866126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:37.830 [2024-11-20 07:38:01.871705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.830 [2024-11-20 07:38:01.871753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:37.830 [2024-11-20 07:38:01.871768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.586 ms 00:40:37.830 [2024-11-20 07:38:01.871784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.831 [2024-11-20 07:38:01.871841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.831 [2024-11-20 07:38:01.871854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:37.831 [2024-11-20 07:38:01.871867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:40:37.831 [2024-11-20 07:38:01.871878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.831 [2024-11-20 07:38:01.871960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:37.831 [2024-11-20 07:38:01.871986] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:37.831 [2024-11-20 07:38:01.872046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:37.831 [2024-11-20 07:38:01.872070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:37.831 [2024-11-20 07:38:01.872196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:37.831 [2024-11-20 07:38:01.872222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:37.831 [2024-11-20 07:38:01.872238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:37.831 [2024-11-20 07:38:01.872254] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872268] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872281] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:37.831 [2024-11-20 07:38:01.872293] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:37.831 [2024-11-20 07:38:01.872305] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:37.831 [2024-11-20 07:38:01.872316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:37.831 [2024-11-20 07:38:01.872335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.831 [2024-11-20 07:38:01.872347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:37.831 [2024-11-20 07:38:01.872360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:40:37.831 [2024-11-20 07:38:01.872371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.831 [2024-11-20 07:38:01.872466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.831 [2024-11-20 07:38:01.872479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:37.831 [2024-11-20 07:38:01.872491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:40:37.831 [2024-11-20 07:38:01.872502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.831 [2024-11-20 07:38:01.872617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:37.831 [2024-11-20 07:38:01.872637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:37.831 [2024-11-20 07:38:01.872649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:37.831 [2024-11-20 07:38:01.872686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:37.831 [2024-11-20 07:38:01.872720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:37.831 [2024-11-20 07:38:01.872742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:37.831 [2024-11-20 07:38:01.872754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:37.831 [2024-11-20 07:38:01.872764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:37.831 [2024-11-20 07:38:01.872775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:37.831 [2024-11-20 07:38:01.872786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:37.831 [2024-11-20 07:38:01.872808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:37.831 [2024-11-20 07:38:01.872845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872855] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:37.831 [2024-11-20 07:38:01.872877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:37.831 [2024-11-20 07:38:01.872910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:37.831 [2024-11-20 07:38:01.872941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:37.831 [2024-11-20 07:38:01.872973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:37.831 [2024-11-20 07:38:01.872983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:37.831 [2024-11-20 07:38:01.872994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:37.831 [2024-11-20 07:38:01.873005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:37.831 [2024-11-20 07:38:01.873015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:37.831 [2024-11-20 07:38:01.873026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:37.831 [2024-11-20 07:38:01.873037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:37.831 [2024-11-20 07:38:01.873047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:37.831 [2024-11-20 07:38:01.873060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:37.831 [2024-11-20 07:38:01.873071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:37.831 [2024-11-20 07:38:01.873081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.873092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:37.831 [2024-11-20 07:38:01.873103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:37.831 [2024-11-20 07:38:01.873113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.873124] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:37.831 [2024-11-20 07:38:01.873136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:37.831 [2024-11-20 07:38:01.873147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:37.831 [2024-11-20 07:38:01.873159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:37.831 [2024-11-20 07:38:01.873171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:37.831 [2024-11-20 07:38:01.873182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:37.831 [2024-11-20 07:38:01.873193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:37.831 
[2024-11-20 07:38:01.873204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:37.831 [2024-11-20 07:38:01.873214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:37.831 [2024-11-20 07:38:01.873225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:37.831 [2024-11-20 07:38:01.873237] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:37.831 [2024-11-20 07:38:01.873252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:37.831 [2024-11-20 07:38:01.873277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:37.831 [2024-11-20 07:38:01.873289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:37.831 [2024-11-20 07:38:01.873301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:37.831 [2024-11-20 07:38:01.873312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:37.831 [2024-11-20 07:38:01.873324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:37.831 [2024-11-20 07:38:01.873336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:37.831 [2024-11-20 07:38:01.873348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:37.831 [2024-11-20 07:38:01.873360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:37.831 [2024-11-20 07:38:01.873372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:37.831 [2024-11-20 07:38:01.873432] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:37.831 [2024-11-20 07:38:01.873449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:37.831 [2024-11-20 07:38:01.873462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:40:37.832 [2024-11-20 07:38:01.873474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:37.832 [2024-11-20 07:38:01.873486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:37.832 [2024-11-20 07:38:01.873498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:37.832 [2024-11-20 07:38:01.873510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.873522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:37.832 [2024-11-20 07:38:01.873534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:40:37.832 [2024-11-20 07:38:01.873546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.915743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.915799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:37.832 [2024-11-20 07:38:01.915844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.137 ms 00:40:37.832 [2024-11-20 07:38:01.915857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.915984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.915996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:37.832 [2024-11-20 07:38:01.916007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:40:37.832 [2024-11-20 07:38:01.916018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.977762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.977851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:37.832 [2024-11-20 07:38:01.977868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.655 ms 00:40:37.832 [2024-11-20 07:38:01.977880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.977947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.977960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:37.832 [2024-11-20 07:38:01.977972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:37.832 [2024-11-20 07:38:01.977988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.978532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.978555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:37.832 [2024-11-20 07:38:01.978568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:40:37.832 [2024-11-20 07:38:01.978579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:01.978713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:01.978729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:37.832 [2024-11-20 07:38:01.978741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:40:37.832 [2024-11-20 07:38:01.978759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:02.000917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:02.000969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:37.832 [2024-11-20 07:38:02.000992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.131 ms 00:40:37.832 [2024-11-20 07:38:02.001004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:37.832 [2024-11-20 07:38:02.024541] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:37.832 [2024-11-20 07:38:02.024612] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:37.832 [2024-11-20 07:38:02.024635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:37.832 [2024-11-20 07:38:02.024647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:37.832 [2024-11-20 07:38:02.024662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.462 ms 00:40:37.832 [2024-11-20 07:38:02.024673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.058684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.058781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:38.091 [2024-11-20 07:38:02.058801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.908 ms 00:40:38.091 [2024-11-20 07:38:02.058832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.081830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.081905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:38.091 [2024-11-20 07:38:02.081922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:40:38.091 [2024-11-20 07:38:02.081933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.102741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.102810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:38.091 [2024-11-20 07:38:02.102853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.712 ms 00:40:38.091 [2024-11-20 07:38:02.102866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.103826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.103880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:38.091 [2024-11-20 07:38:02.103895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:40:38.091 [2024-11-20 07:38:02.103913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.198653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.198729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:38.091 [2024-11-20 07:38:02.198760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.701 ms 00:40:38.091 [2024-11-20 07:38:02.198773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.213304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:38.091 [2024-11-20 07:38:02.216861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.216904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:38.091 [2024-11-20 07:38:02.216920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:40:38.091 [2024-11-20 07:38:02.216932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.217082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.217096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:38.091 [2024-11-20 07:38:02.217108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:38.091 [2024-11-20 07:38:02.217123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.218037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.218066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:38.091 [2024-11-20 07:38:02.218082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:40:38.091 [2024-11-20 07:38:02.218123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.218158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.218171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:38.091 [2024-11-20 07:38:02.218182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:38.091 [2024-11-20 07:38:02.218193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.218232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:38.091 [2024-11-20 07:38:02.218250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.218261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:38.091 [2024-11-20 07:38:02.218272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:40:38.091 [2024-11-20 07:38:02.218283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.260155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.260245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:38.091 [2024-11-20 07:38:02.260262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.845 ms 00:40:38.091 [2024-11-20 07:38:02.260287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:38.091 [2024-11-20 07:38:02.260419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:38.091 [2024-11-20 07:38:02.260433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:38.091 [2024-11-20 07:38:02.260445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:40:38.091 [2024-11-20 07:38:02.260456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
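(The trace_step records above arrive in fixed Action/name/duration/status quadruples, one per FTL management step, so the startup log doubles as a per-step timing profile: here "Restore P2L checkpoints" at 94.701 ms and "Initialize NV cache" at 61.655 ms dominate the bring-up. A minimal sketch of pulling that profile out of a saved console log; "build.log" is a hypothetical file name, and the only assumption is the trace_step line format printed above:)

    #!/usr/bin/env bash
    # List FTL management steps by duration, longest first.
    # Pairs each "name:" record with the "duration:" record that follows it.
    grep -E 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] (name|duration):' build.log |
      awk '/name:/     { sub(/.*name: /, "");     step = $0 }
           /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                         printf "%10.3f ms  %s\n", $0, step }' |
      sort -rn | head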
00:40:38.091 [2024-11-20 07:38:02.261901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.561 ms, result 0 00:40:39.500  [2024-11-20T07:38:04.638Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-20T07:38:05.575Z] Copying: 65/1024 [MB] (32 MBps) [2024-11-20T07:38:06.511Z] Copying: 98/1024 [MB] (33 MBps) [2024-11-20T07:38:07.884Z] Copying: 131/1024 [MB] (32 MBps) [2024-11-20T07:38:08.820Z] Copying: 164/1024 [MB] (32 MBps) [2024-11-20T07:38:09.755Z] Copying: 198/1024 [MB] (33 MBps) [2024-11-20T07:38:10.690Z] Copying: 231/1024 [MB] (33 MBps) [2024-11-20T07:38:11.626Z] Copying: 265/1024 [MB] (33 MBps) [2024-11-20T07:38:12.562Z] Copying: 297/1024 [MB] (32 MBps) [2024-11-20T07:38:13.939Z] Copying: 328/1024 [MB] (31 MBps) [2024-11-20T07:38:14.507Z] Copying: 360/1024 [MB] (31 MBps) [2024-11-20T07:38:15.885Z] Copying: 392/1024 [MB] (32 MBps) [2024-11-20T07:38:16.822Z] Copying: 424/1024 [MB] (31 MBps) [2024-11-20T07:38:17.758Z] Copying: 456/1024 [MB] (32 MBps) [2024-11-20T07:38:18.697Z] Copying: 491/1024 [MB] (34 MBps) [2024-11-20T07:38:19.635Z] Copying: 525/1024 [MB] (34 MBps) [2024-11-20T07:38:20.571Z] Copying: 559/1024 [MB] (33 MBps) [2024-11-20T07:38:21.506Z] Copying: 591/1024 [MB] (32 MBps) [2024-11-20T07:38:22.883Z] Copying: 625/1024 [MB] (33 MBps) [2024-11-20T07:38:23.818Z] Copying: 658/1024 [MB] (33 MBps) [2024-11-20T07:38:24.754Z] Copying: 692/1024 [MB] (33 MBps) [2024-11-20T07:38:25.691Z] Copying: 726/1024 [MB] (33 MBps) [2024-11-20T07:38:26.668Z] Copying: 761/1024 [MB] (34 MBps) [2024-11-20T07:38:27.605Z] Copying: 796/1024 [MB] (35 MBps) [2024-11-20T07:38:28.542Z] Copying: 828/1024 [MB] (32 MBps) [2024-11-20T07:38:29.919Z] Copying: 861/1024 [MB] (32 MBps) [2024-11-20T07:38:30.858Z] Copying: 890/1024 [MB] (29 MBps) [2024-11-20T07:38:31.793Z] Copying: 924/1024 [MB] (33 MBps) [2024-11-20T07:38:32.730Z] Copying: 952/1024 [MB] (28 MBps) [2024-11-20T07:38:33.682Z] Copying: 983/1024 [MB] (31 MBps) [2024-11-20T07:38:33.944Z] Copying: 1013/1024 [MB] (30 MBps) [2024-11-20T07:38:34.204Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 07:38:34.002163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.002247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:10.001 [2024-11-20 07:38:34.002270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:10.001 [2024-11-20 07:38:34.002284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.002315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:10.001 [2024-11-20 07:38:34.007650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.007726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:10.001 [2024-11-20 07:38:34.007764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms 00:41:10.001 [2024-11-20 07:38:34.007781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.008098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.008120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:10.001 [2024-11-20 07:38:34.008140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:41:10.001 [2024-11-20 07:38:34.008157] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.012056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.012102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:10.001 [2024-11-20 07:38:34.012121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.873 ms 00:41:10.001 [2024-11-20 07:38:34.012140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.019383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.019458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:10.001 [2024-11-20 07:38:34.019476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.200 ms 00:41:10.001 [2024-11-20 07:38:34.019489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.064010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.064114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:10.001 [2024-11-20 07:38:34.064135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.395 ms 00:41:10.001 [2024-11-20 07:38:34.064149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.089181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.089277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:10.001 [2024-11-20 07:38:34.089299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.935 ms 00:41:10.001 [2024-11-20 07:38:34.089313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.091454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.091535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:10.001 [2024-11-20 07:38:34.091554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.039 ms 00:41:10.001 [2024-11-20 07:38:34.091569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.135902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.136013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:10.001 [2024-11-20 07:38:34.136034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.302 ms 00:41:10.001 [2024-11-20 07:38:34.136048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.001 [2024-11-20 07:38:34.180070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.001 [2024-11-20 07:38:34.180183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:10.001 [2024-11-20 07:38:34.180204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.921 ms 00:41:10.001 [2024-11-20 07:38:34.180217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.263 [2024-11-20 07:38:34.224010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.263 [2024-11-20 07:38:34.224106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:10.263 [2024-11-20 07:38:34.224127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 43.703 ms 00:41:10.263 [2024-11-20 07:38:34.224140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.263 [2024-11-20 07:38:34.267472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.263 [2024-11-20 07:38:34.267565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:10.263 [2024-11-20 07:38:34.267588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.126 ms 00:41:10.263 [2024-11-20 07:38:34.267602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.263 [2024-11-20 07:38:34.267707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:10.263 [2024-11-20 07:38:34.267731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:41:10.263 [2024-11-20 07:38:34.267764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:41:10.263 [2024-11-20 07:38:34.267780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.267991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:41:10.263 [2024-11-20 07:38:34.268045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:10.263 [2024-11-20 07:38:34.268804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.268997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:10.264 [2024-11-20 07:38:34.269187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:10.264 [2024-11-20 07:38:34.269206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2a0b8524-eff8-4987-9ed1-fbd226a5ac54 00:41:10.264 [2024-11-20 07:38:34.269238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:41:10.264 [2024-11-20 07:38:34.269252] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:10.264 [2024-11-20 07:38:34.269265] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:10.264 [2024-11-20 07:38:34.269279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:10.264 [2024-11-20 07:38:34.269292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:10.264 [2024-11-20 07:38:34.269308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:10.264 [2024-11-20 07:38:34.269338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:10.264 [2024-11-20 07:38:34.269351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:10.264 [2024-11-20 07:38:34.269364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:10.264 [2024-11-20 07:38:34.269379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.264 [2024-11-20 07:38:34.269393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:10.264 [2024-11-20 07:38:34.269408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:41:10.264 [2024-11-20 07:38:34.269422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.292691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.264 [2024-11-20 07:38:34.292793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:10.264 [2024-11-20 07:38:34.292840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.192 ms 00:41:10.264 [2024-11-20 07:38:34.292855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.293544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:10.264 [2024-11-20 07:38:34.293567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:10.264 [2024-11-20 07:38:34.293597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:41:10.264 [2024-11-20 07:38:34.293611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.352712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.264 
[2024-11-20 07:38:34.352812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:10.264 [2024-11-20 07:38:34.352842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.264 [2024-11-20 07:38:34.352857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.352944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.264 [2024-11-20 07:38:34.352958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:10.264 [2024-11-20 07:38:34.352981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.264 [2024-11-20 07:38:34.352994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.353134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.264 [2024-11-20 07:38:34.353152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:10.264 [2024-11-20 07:38:34.353167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.264 [2024-11-20 07:38:34.353181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.264 [2024-11-20 07:38:34.353204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.264 [2024-11-20 07:38:34.353218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:10.264 [2024-11-20 07:38:34.353232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.264 [2024-11-20 07:38:34.353269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.491307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.491415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:10.524 [2024-11-20 07:38:34.491437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.491451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:10.524 [2024-11-20 07:38:34.606268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.606299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:10.524 [2024-11-20 07:38:34.606452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.606467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:10.524 [2024-11-20 07:38:34.606572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.606585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606731] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:10.524 [2024-11-20 07:38:34.606767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.606780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:10.524 [2024-11-20 07:38:34.606891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.606905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.606960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.606976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:10.524 [2024-11-20 07:38:34.606990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.607004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.607059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:10.524 [2024-11-20 07:38:34.607075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:10.524 [2024-11-20 07:38:34.607089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:10.524 [2024-11-20 07:38:34.607103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:10.524 [2024-11-20 07:38:34.607251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 605.051 ms, result 0 00:41:11.902 00:41:11.902 00:41:11.902 07:38:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:41:13.805 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:41:13.805 07:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:41:13.805 07:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:41:13.805 07:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:13.805 07:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:41:13.805 07:38:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78902 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78902 ']' 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78902 00:41:14.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78902) - No such process 00:41:14.063 Process with pid 78902 is not found 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 78902 is not found' 00:41:14.063 07:38:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:41:14.322 Remove shared memory files 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:41:14.322 00:41:14.322 real 3m21.093s 00:41:14.322 user 3m46.669s 00:41:14.322 sys 0m39.401s 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:14.322 07:38:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:14.322 ************************************ 00:41:14.322 END TEST ftl_dirty_shutdown 00:41:14.322 ************************************ 00:41:14.322 07:38:38 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:41:14.322 07:38:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:14.322 07:38:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:14.322 07:38:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:14.322 ************************************ 00:41:14.322 START TEST ftl_upgrade_shutdown 00:41:14.322 ************************************ 00:41:14.322 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:41:14.581 * Looking for test storage... 
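(The "testfile2: OK" line above is the actual pass condition of ftl_dirty_shutdown: a checksum recorded before the unclean stop must still match once the FTL device has been brought back up, and md5sum -c exits non-zero on any mismatch, which fails the test. The checkpoint-then-verify idiom looks roughly like the sketch below; the dd sizes and the testfile/testfile.md5 paths are placeholders, not the exact commands of dirty_shutdown.sh:)

    # Before the dirty shutdown: write known data and record its checksum.
    dd if=/dev/urandom of=testfile bs=1M count=256 conv=fsync
    md5sum testfile > testfile.md5
    # After tearing the device down uncleanly and restoring it:
    md5sum -c testfile.md5 || echo 'data lost across dirty shutdown' >&2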
00:41:14.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:14.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.581 --rc genhtml_branch_coverage=1 00:41:14.581 --rc genhtml_function_coverage=1 00:41:14.581 --rc genhtml_legend=1 00:41:14.581 --rc geninfo_all_blocks=1 00:41:14.581 --rc geninfo_unexecuted_blocks=1 00:41:14.581 00:41:14.581 ' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:14.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.581 --rc genhtml_branch_coverage=1 00:41:14.581 --rc genhtml_function_coverage=1 00:41:14.581 --rc genhtml_legend=1 00:41:14.581 --rc geninfo_all_blocks=1 00:41:14.581 --rc geninfo_unexecuted_blocks=1 00:41:14.581 00:41:14.581 ' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:14.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.581 --rc genhtml_branch_coverage=1 00:41:14.581 --rc genhtml_function_coverage=1 00:41:14.581 --rc genhtml_legend=1 00:41:14.581 --rc geninfo_all_blocks=1 00:41:14.581 --rc geninfo_unexecuted_blocks=1 00:41:14.581 00:41:14.581 ' 00:41:14.581 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:14.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:14.582 --rc genhtml_branch_coverage=1 00:41:14.582 --rc genhtml_function_coverage=1 00:41:14.582 --rc genhtml_legend=1 00:41:14.582 --rc geninfo_all_blocks=1 00:41:14.582 --rc geninfo_unexecuted_blocks=1 00:41:14.582 00:41:14.582 ' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:41:14.582 07:38:38 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81036 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81036 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81036 ']' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:14.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:14.582 07:38:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:14.841 [2024-11-20 07:38:38.859093] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
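The target side is now up: spdk_tgt was launched pinned to core 0 and waitforlisten blocked until the RPC socket answered. A simplified stand-in for that sequence (the polling loop is an assumption; SPDK's waitforlisten helper is more thorough about PID and retry handling):

# Launch the SPDK target on core 0 and wait for its RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
spdk_tgt_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
    sleep 0.1   # rpc.py talks to /var/tmp/spdk.sock by default
done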
00:41:14.841 [2024-11-20 07:38:38.859317] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81036 ] 00:41:15.100 [2024-11-20 07:38:39.068019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.100 [2024-11-20 07:38:39.226671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.069 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:41:16.070 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:16.647 { 00:41:16.647 "name": "basen1", 00:41:16.647 "aliases": [ 00:41:16.647 "1a80ae2f-dfc8-47f8-89d2-2911436211c6" 00:41:16.647 ], 00:41:16.647 "product_name": "NVMe disk", 00:41:16.647 "block_size": 4096, 00:41:16.647 "num_blocks": 1310720, 00:41:16.647 "uuid": "1a80ae2f-dfc8-47f8-89d2-2911436211c6", 00:41:16.647 "numa_id": -1, 00:41:16.647 "assigned_rate_limits": { 00:41:16.647 "rw_ios_per_sec": 0, 00:41:16.647 "rw_mbytes_per_sec": 0, 00:41:16.647 "r_mbytes_per_sec": 0, 00:41:16.647 "w_mbytes_per_sec": 0 00:41:16.647 }, 00:41:16.647 "claimed": true, 00:41:16.647 "claim_type": "read_many_write_one", 00:41:16.647 "zoned": false, 00:41:16.647 "supported_io_types": { 00:41:16.647 "read": true, 00:41:16.647 "write": true, 00:41:16.647 "unmap": true, 00:41:16.647 "flush": true, 00:41:16.647 "reset": true, 00:41:16.647 "nvme_admin": true, 00:41:16.647 "nvme_io": true, 00:41:16.647 "nvme_io_md": false, 00:41:16.647 "write_zeroes": true, 00:41:16.647 "zcopy": false, 00:41:16.647 "get_zone_info": false, 00:41:16.647 "zone_management": false, 00:41:16.647 "zone_append": false, 00:41:16.647 "compare": true, 00:41:16.647 "compare_and_write": false, 00:41:16.647 "abort": true, 00:41:16.647 "seek_hole": false, 00:41:16.647 "seek_data": false, 00:41:16.647 "copy": true, 00:41:16.647 "nvme_iov_md": false 00:41:16.647 }, 00:41:16.647 "driver_specific": { 00:41:16.647 "nvme": [ 00:41:16.647 { 00:41:16.647 "pci_address": "0000:00:11.0", 00:41:16.647 "trid": { 00:41:16.647 "trtype": "PCIe", 00:41:16.647 "traddr": "0000:00:11.0" 00:41:16.647 }, 00:41:16.647 "ctrlr_data": { 00:41:16.647 "cntlid": 0, 00:41:16.647 "vendor_id": "0x1b36", 00:41:16.647 "model_number": "QEMU NVMe Ctrl", 00:41:16.647 "serial_number": "12341", 00:41:16.647 "firmware_revision": "8.0.0", 00:41:16.647 "subnqn": "nqn.2019-08.org.qemu:12341", 00:41:16.647 "oacs": { 00:41:16.647 "security": 0, 00:41:16.647 "format": 1, 00:41:16.647 "firmware": 0, 00:41:16.647 "ns_manage": 1 00:41:16.647 }, 00:41:16.647 "multi_ctrlr": false, 00:41:16.647 "ana_reporting": false 00:41:16.647 }, 00:41:16.647 "vs": { 00:41:16.647 "nvme_version": "1.4" 00:41:16.647 }, 00:41:16.647 "ns_data": { 00:41:16.647 "id": 1, 00:41:16.647 "can_share": false 00:41:16.647 } 00:41:16.647 } 00:41:16.647 ], 00:41:16.647 "mp_policy": "active_passive" 00:41:16.647 } 00:41:16.647 } 00:41:16.647 ]' 00:41:16.647 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:16.906 07:38:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:17.164 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e8d9f660-54f1-4000-8d08-889fab77b361 00:41:17.164 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:41:17.164 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8d9f660-54f1-4000-8d08-889fab77b361 00:41:17.423 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:41:17.682 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0edf62e4-1f8e-407d-93f5-bc206b4638a0 00:41:17.682 07:38:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0edf62e4-1f8e-407d-93f5-bc206b4638a0 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=21661365-5538-456a-92cd-9d79b87e7ad7 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 21661365-5538-456a-92cd-9d79b87e7ad7 ]] 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 21661365-5538-456a-92cd-9d79b87e7ad7 5120 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=21661365-5538-456a-92cd-9d79b87e7ad7 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 21661365-5538-456a-92cd-9d79b87e7ad7 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=21661365-5538-456a-92cd-9d79b87e7ad7 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:41:17.941 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 21661365-5538-456a-92cd-9d79b87e7ad7 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:41:18.200 { 00:41:18.200 "name": "21661365-5538-456a-92cd-9d79b87e7ad7", 00:41:18.200 "aliases": [ 00:41:18.200 "lvs/basen1p0" 00:41:18.200 ], 00:41:18.200 "product_name": "Logical Volume", 00:41:18.200 "block_size": 4096, 00:41:18.200 "num_blocks": 5242880, 00:41:18.200 "uuid": "21661365-5538-456a-92cd-9d79b87e7ad7", 00:41:18.200 "assigned_rate_limits": { 00:41:18.200 "rw_ios_per_sec": 0, 00:41:18.200 "rw_mbytes_per_sec": 0, 00:41:18.200 "r_mbytes_per_sec": 0, 00:41:18.200 "w_mbytes_per_sec": 0 00:41:18.200 }, 00:41:18.200 "claimed": false, 00:41:18.200 "zoned": false, 00:41:18.200 "supported_io_types": { 00:41:18.200 "read": true, 00:41:18.200 "write": true, 00:41:18.200 "unmap": true, 00:41:18.200 "flush": false, 00:41:18.200 "reset": true, 00:41:18.200 "nvme_admin": false, 00:41:18.200 "nvme_io": false, 00:41:18.200 "nvme_io_md": false, 00:41:18.200 "write_zeroes": 
true, 00:41:18.200 "zcopy": false, 00:41:18.200 "get_zone_info": false, 00:41:18.200 "zone_management": false, 00:41:18.200 "zone_append": false, 00:41:18.200 "compare": false, 00:41:18.200 "compare_and_write": false, 00:41:18.200 "abort": false, 00:41:18.200 "seek_hole": true, 00:41:18.200 "seek_data": true, 00:41:18.200 "copy": false, 00:41:18.200 "nvme_iov_md": false 00:41:18.200 }, 00:41:18.200 "driver_specific": { 00:41:18.200 "lvol": { 00:41:18.200 "lvol_store_uuid": "0edf62e4-1f8e-407d-93f5-bc206b4638a0", 00:41:18.200 "base_bdev": "basen1", 00:41:18.200 "thin_provision": true, 00:41:18.200 "num_allocated_clusters": 0, 00:41:18.200 "snapshot": false, 00:41:18.200 "clone": false, 00:41:18.200 "esnap_clone": false 00:41:18.200 } 00:41:18.200 } 00:41:18.200 } 00:41:18.200 ]' 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:41:18.200 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:41:18.201 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:41:18.459 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:41:18.459 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:41:18.459 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:41:18.718 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:41:18.718 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:41:18.718 07:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 21661365-5538-456a-92cd-9d79b87e7ad7 -c cachen1p0 --l2p_dram_limit 2 00:41:18.977 [2024-11-20 07:38:43.159901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.159996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:41:18.977 [2024-11-20 07:38:43.160021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:41:18.977 [2024-11-20 07:38:43.160036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.160121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.160136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:18.977 [2024-11-20 07:38:43.160154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:41:18.977 [2024-11-20 07:38:43.160168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.160201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:41:18.977 [2024-11-20 
07:38:43.161443] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:41:18.977 [2024-11-20 07:38:43.161497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.161513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:18.977 [2024-11-20 07:38:43.161532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.298 ms 00:41:18.977 [2024-11-20 07:38:43.161547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.161748] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 70c64441-8edb-41f9-9b4f-ea726476b71f 00:41:18.977 [2024-11-20 07:38:43.163484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.163535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:41:18.977 [2024-11-20 07:38:43.163554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:41:18.977 [2024-11-20 07:38:43.163571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.171587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.171671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:18.977 [2024-11-20 07:38:43.171694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.884 ms 00:41:18.977 [2024-11-20 07:38:43.171711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.171781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.171806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:18.977 [2024-11-20 07:38:43.171821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:41:18.977 [2024-11-20 07:38:43.171853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.171916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.171935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:41:18.977 [2024-11-20 07:38:43.171949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:41:18.977 [2024-11-20 07:38:43.171974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.172010] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:41:18.977 [2024-11-20 07:38:43.177594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.177654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:18.977 [2024-11-20 07:38:43.177677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.589 ms 00:41:18.977 [2024-11-20 07:38:43.177691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.177740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:18.977 [2024-11-20 07:38:43.177754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:41:18.977 [2024-11-20 07:38:43.177772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:41:18.977 [2024-11-20 07:38:43.177785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:41:18.977 [2024-11-20 07:38:43.177888] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:41:18.977 [2024-11-20 07:38:43.178038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:41:18.977 [2024-11-20 07:38:43.178082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:41:18.977 [2024-11-20 07:38:43.178101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:41:19.237 [2024-11-20 07:38:43.178134] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:41:19.237 [2024-11-20 07:38:43.178151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:41:19.237 [2024-11-20 07:38:43.178170] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:41:19.237 [2024-11-20 07:38:43.178184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:41:19.237 [2024-11-20 07:38:43.178206] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:41:19.237 [2024-11-20 07:38:43.178220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:41:19.237 [2024-11-20 07:38:43.178238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:19.237 [2024-11-20 07:38:43.178252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:41:19.237 [2024-11-20 07:38:43.178271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:41:19.237 [2024-11-20 07:38:43.178285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:19.237 [2024-11-20 07:38:43.178381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:19.237 [2024-11-20 07:38:43.178403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:41:19.237 [2024-11-20 07:38:43.178423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:41:19.237 [2024-11-20 07:38:43.178454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:19.237 [2024-11-20 07:38:43.178580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:41:19.237 [2024-11-20 07:38:43.178599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:41:19.237 [2024-11-20 07:38:43.178617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:19.237 [2024-11-20 07:38:43.178632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.237 [2024-11-20 07:38:43.178649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:41:19.237 [2024-11-20 07:38:43.178663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:41:19.237 [2024-11-20 07:38:43.178680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:41:19.238 [2024-11-20 07:38:43.178694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:41:19.238 [2024-11-20 07:38:43.178711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:41:19.238 [2024-11-20 07:38:43.178724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.178741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:41:19.238 [2024-11-20 07:38:43.178754] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:41:19.238 [2024-11-20 07:38:43.178771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.178784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:41:19.238 [2024-11-20 07:38:43.178801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:41:19.238 [2024-11-20 07:38:43.178834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.178856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:41:19.238 [2024-11-20 07:38:43.178869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:41:19.238 [2024-11-20 07:38:43.178887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.178901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:41:19.238 [2024-11-20 07:38:43.178919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:41:19.238 [2024-11-20 07:38:43.178934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:19.238 [2024-11-20 07:38:43.178951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:41:19.238 [2024-11-20 07:38:43.178965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:41:19.238 [2024-11-20 07:38:43.178982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:19.238 [2024-11-20 07:38:43.178995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:41:19.238 [2024-11-20 07:38:43.179012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:41:19.238 [2024-11-20 07:38:43.179025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:19.238 [2024-11-20 07:38:43.179042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:41:19.238 [2024-11-20 07:38:43.179055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:41:19.238 [2024-11-20 07:38:43.179072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:41:19.238 [2024-11-20 07:38:43.179086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:41:19.238 [2024-11-20 07:38:43.179105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:41:19.238 [2024-11-20 07:38:43.179118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:41:19.238 [2024-11-20 07:38:43.179148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:41:19.238 [2024-11-20 07:38:43.179165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:41:19.238 [2024-11-20 07:38:43.179194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:41:19.238 [2024-11-20 07:38:43.179236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:41:19.238 [2024-11-20 07:38:43.179252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179265] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:41:19.238 [2024-11-20 07:38:43.179283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:41:19.238 [2024-11-20 07:38:43.179297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:41:19.238 [2024-11-20 07:38:43.179316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:41:19.238 [2024-11-20 07:38:43.179331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:41:19.238 [2024-11-20 07:38:43.179350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:41:19.238 [2024-11-20 07:38:43.179363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:41:19.238 [2024-11-20 07:38:43.179380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:41:19.238 [2024-11-20 07:38:43.179393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:41:19.238 [2024-11-20 07:38:43.179410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:41:19.238 [2024-11-20 07:38:43.179432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:41:19.238 [2024-11-20 07:38:43.179453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:41:19.238 [2024-11-20 07:38:43.179492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:41:19.238 [2024-11-20 07:38:43.179540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:41:19.238 [2024-11-20 07:38:43.179558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:41:19.238 [2024-11-20 07:38:43.179573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:41:19.238 [2024-11-20 07:38:43.179591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:41:19.238 [2024-11-20 07:38:43.179707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:41:19.238 [2024-11-20 07:38:43.179727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:19.238 [2024-11-20 07:38:43.179760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:41:19.238 [2024-11-20 07:38:43.179775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:41:19.238 [2024-11-20 07:38:43.179793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:41:19.238 [2024-11-20 07:38:43.179808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:19.238 [2024-11-20 07:38:43.179837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:41:19.238 [2024-11-20 07:38:43.179851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.299 ms 00:41:19.238 [2024-11-20 07:38:43.179869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:19.238 [2024-11-20 07:38:43.179926] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
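A quick cross-check of the layout numbers printed above: 3774873 L2P entries at 4 bytes per entry is why the l2p region occupies about 14.5 MiB, and at 4 KiB per block it also puts roughly 14.4 GiB of the 20 GiB base device in user-addressable space (reading the remainder as metadata and over-provisioning is the usual FTL interpretation, not something the log states):

# 3774873 entries * 4 B = 15099492 B, i.e. just under the 14.50 MiB
# l2p region reported in the NV cache layout dump above.
echo 'scale=2; 3774873 * 4 / 1048576' | bc   # 14.39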
00:41:19.238 [2024-11-20 07:38:43.179949] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:41:21.771 [2024-11-20 07:38:45.534396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.534500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:41:21.771 [2024-11-20 07:38:45.534523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2354.453 ms 00:41:21.771 [2024-11-20 07:38:45.534542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.576143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.576243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:21.771 [2024-11-20 07:38:45.576265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.213 ms 00:41:21.771 [2024-11-20 07:38:45.576282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.576424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.576445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:41:21.771 [2024-11-20 07:38:45.576460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:41:21.771 [2024-11-20 07:38:45.576480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.626100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.626186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:21.771 [2024-11-20 07:38:45.626223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.558 ms 00:41:21.771 [2024-11-20 07:38:45.626242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.626309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.626335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:21.771 [2024-11-20 07:38:45.626351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:21.771 [2024-11-20 07:38:45.626368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.626960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.626995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:21.771 [2024-11-20 07:38:45.627011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.477 ms 00:41:21.771 [2024-11-20 07:38:45.627029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.627097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.627116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:21.771 [2024-11-20 07:38:45.627135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:41:21.771 [2024-11-20 07:38:45.627156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.649159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.649261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:21.771 [2024-11-20 07:38:45.649280] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.972 ms 00:41:21.771 [2024-11-20 07:38:45.649298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.664453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:41:21.771 [2024-11-20 07:38:45.665762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.665803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:41:21.771 [2024-11-20 07:38:45.665846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.305 ms 00:41:21.771 [2024-11-20 07:38:45.665862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.709085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.709196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:41:21.771 [2024-11-20 07:38:45.709222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.142 ms 00:41:21.771 [2024-11-20 07:38:45.709237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.709403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.709424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:41:21.771 [2024-11-20 07:38:45.709446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:41:21.771 [2024-11-20 07:38:45.709459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.754063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.754158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:41:21.771 [2024-11-20 07:38:45.754184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.488 ms 00:41:21.771 [2024-11-20 07:38:45.754199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.797997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.798109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:41:21.771 [2024-11-20 07:38:45.798147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.687 ms 00:41:21.771 [2024-11-20 07:38:45.798161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.799012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.799052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:41:21.771 [2024-11-20 07:38:45.799072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.753 ms 00:41:21.771 [2024-11-20 07:38:45.799086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.914271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:21.771 [2024-11-20 07:38:45.914378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:41:21.771 [2024-11-20 07:38:45.914409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 115.046 ms 00:41:21.771 [2024-11-20 07:38:45.914424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:21.771 [2024-11-20 07:38:45.958606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:41:21.772 [2024-11-20 07:38:45.958703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:41:21.772 [2024-11-20 07:38:45.958747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.981 ms 00:41:21.772 [2024-11-20 07:38:45.958762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:22.030 [2024-11-20 07:38:46.002625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:22.030 [2024-11-20 07:38:46.002716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:41:22.030 [2024-11-20 07:38:46.002740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.733 ms 00:41:22.030 [2024-11-20 07:38:46.002754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:22.030 [2024-11-20 07:38:46.045841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:22.030 [2024-11-20 07:38:46.045947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:41:22.030 [2024-11-20 07:38:46.045974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.943 ms 00:41:22.030 [2024-11-20 07:38:46.045993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:22.030 [2024-11-20 07:38:46.046105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:22.030 [2024-11-20 07:38:46.046131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:41:22.030 [2024-11-20 07:38:46.046154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:41:22.030 [2024-11-20 07:38:46.046167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:22.030 [2024-11-20 07:38:46.046352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:22.030 [2024-11-20 07:38:46.046368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:41:22.030 [2024-11-20 07:38:46.046410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:41:22.030 [2024-11-20 07:38:46.046424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:22.030 [2024-11-20 07:38:46.047731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2887.270 ms, result 0 00:41:22.030 { 00:41:22.030 "name": "ftl", 00:41:22.030 "uuid": "70c64441-8edb-41f9-9b4f-ea726476b71f" 00:41:22.030 } 00:41:22.030 07:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:41:22.289 [2024-11-20 07:38:46.354788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.289 07:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:41:22.547 07:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:41:22.805 [2024-11-20 07:38:46.803285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:41:22.805 07:38:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:41:23.063 [2024-11-20 07:38:47.078400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:23.063 07:38:47 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:23.321 Fill FTL, iteration 1 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81161 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81161 /var/tmp/spdk.tgt.sock 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81161 ']' 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:23.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:23.321 07:38:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:23.579 [2024-11-20 07:38:47.562569] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
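Collected in one place, the four RPCs just traced are the whole NVMe/TCP export path for the ftl bdev (arguments exactly as in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport --trtype TCP
$rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # allow any host, max 1 namespace
$rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl       # expose the ftl bdev as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1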
00:41:23.580 [2024-11-20 07:38:47.562729] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81161 ] 00:41:23.580 [2024-11-20 07:38:47.749803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.838 [2024-11-20 07:38:47.920862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:24.773 07:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:24.773 07:38:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:41:24.773 07:38:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:41:25.031 ftln1 00:41:25.031 07:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:41:25.031 07:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81161 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81161 ']' 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81161 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81161 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:25.289 killing process with pid 81161 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81161' 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81161 00:41:25.289 07:38:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81161 00:41:27.896 07:38:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:41:27.896 07:38:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:41:28.181 [2024-11-20 07:38:52.145451] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
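On the initiator side, the helper attaches to that listener from a second SPDK app pinned to core 1, then snapshots the resulting bdev configuration so later spdk_dd runs can replay it without a live RPC server. Assembled from the trace (the redirection into ini.json reflects how the helper uses spdk_ini_cnfg and is inferred rather than shown verbatim):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2018-09.io.spdk:cnode0        # surfaces the namespace as bdev ftln1
{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json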
00:41:28.181 [2024-11-20 07:38:52.145633] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81219 ] 00:41:28.181 [2024-11-20 07:38:52.336103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.440 [2024-11-20 07:38:52.465982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.816  [2024-11-20T07:38:55.395Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-20T07:38:56.332Z] Copying: 429/1024 [MB] (216 MBps) [2024-11-20T07:38:57.268Z] Copying: 637/1024 [MB] (208 MBps) [2024-11-20T07:38:57.836Z] Copying: 855/1024 [MB] (218 MBps) [2024-11-20T07:38:59.228Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:41:35.025 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:41:35.025 Calculate MD5 checksum, iteration 1 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:35.025 07:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:41:35.025 [2024-11-20 07:38:59.187936] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
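The fill that just completed wrote 1024 one-MiB blocks, about 1 GiB at an average 213 MBps. The readback now starting hashes that same region; its full invocation as traced above, followed by the checksum extraction that appears just below:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '   # first iteration's checksum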
00:41:35.025 [2024-11-20 07:38:59.188115] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81293 ] 00:41:35.284 [2024-11-20 07:38:59.384369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:35.543 [2024-11-20 07:38:59.516024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:36.919  [2024-11-20T07:39:02.057Z] Copying: 584/1024 [MB] (584 MBps) [2024-11-20T07:39:02.992Z] Copying: 1024/1024 [MB] (average 573 MBps) 00:41:38.789 00:41:38.789 07:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:41:38.789 07:39:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:41:40.688 Fill FTL, iteration 2 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8cd062133a36396af508c316b29b8642 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:40.688 07:39:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:41:41.001 [2024-11-20 07:39:04.906324] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
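The second fill pass just launched above uses --seek=1024; with --bs=1048576 the seek/skip counters are in 1 MiB blocks, exactly like dd, so each iteration advances by one 1 GiB window (1024 x 1048576 B). The per-iteration bookkeeping, sketched from the upgrade_shutdown.sh xtrace; fill_ftl, read_window and $file here are hypothetical stand-ins for the spdk_dd invocations shown in the log:

bs=1048576 count=1024 seek=0 skip=0 i=0 iterations=2
while (( i < iterations )); do
  fill_ftl --seek="$seek"                   # window i: bytes [seek*bs, (seek+count)*bs)
  seek=$(( seek + count ))                  # 0 -> 1024 -> 2048, i.e. 0, 1 GiB, 2 GiB
  read_window --skip="$skip"                # read back the window just written
  skip=$(( skip + count ))
  sums[i]=$(md5sum "$file" | cut -f1 -d' ') # iteration 1 recorded 8cd062133a36396af508c316b29b8642
  i=$(( i + 1 ))
done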
00:41:41.001 [2024-11-20 07:39:04.906785] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81356 ] 00:41:41.001 [2024-11-20 07:39:05.097002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:41.259 [2024-11-20 07:39:05.269756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.636  [2024-11-20T07:39:07.774Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-20T07:39:09.211Z] Copying: 379/1024 [MB] (188 MBps) [2024-11-20T07:39:09.842Z] Copying: 569/1024 [MB] (190 MBps) [2024-11-20T07:39:10.775Z] Copying: 756/1024 [MB] (187 MBps) [2024-11-20T07:39:11.341Z] Copying: 935/1024 [MB] (179 MBps) [2024-11-20T07:39:12.733Z] Copying: 1024/1024 [MB] (average 186 MBps) 00:41:48.530 00:41:48.530 Calculate MD5 checksum, iteration 2 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:48.530 07:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:48.530 [2024-11-20 07:39:12.651310] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
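After this second read-back, the records below flip FTL bdev properties over JSON-RPC and count dirty write-buffer chunks. Standalone equivalents of those calls, copied from the commands visible in the log:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/scripts/rpc.py" bdev_ftl_set_property -b ftl -p verbose_mode -v true
"$SPDK/scripts/rpc.py" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
# how many cache chunks hold data? the log computes used=3 from the dump
# below: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN
# at 0.001953125, so three chunks have non-zero utilization.
"$SPDK/scripts/rpc.py" bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length'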
00:41:48.530 [2024-11-20 07:39:12.651797] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81431 ] 00:41:48.788 [2024-11-20 07:39:12.840164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:48.788 [2024-11-20 07:39:12.973382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:50.693  [2024-11-20T07:39:15.833Z] Copying: 588/1024 [MB] (588 MBps) [2024-11-20T07:39:17.209Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:41:53.006 00:41:53.006 07:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:41:53.006 07:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:54.915 07:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:41:54.915 07:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=06494b8b8f9136c6dbc408cbd81a301c 00:41:54.915 07:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:41:54.915 07:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:41:54.915 07:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:54.915 [2024-11-20 07:39:19.100314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:54.915 [2024-11-20 07:39:19.100405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:54.915 [2024-11-20 07:39:19.100424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:41:54.915 [2024-11-20 07:39:19.100436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:54.915 [2024-11-20 07:39:19.100469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:54.915 [2024-11-20 07:39:19.100482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:54.915 [2024-11-20 07:39:19.100494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:41:54.915 [2024-11-20 07:39:19.100511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:54.915 [2024-11-20 07:39:19.100535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:54.915 [2024-11-20 07:39:19.100547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:54.915 [2024-11-20 07:39:19.100558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:41:54.915 [2024-11-20 07:39:19.100569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:54.915 [2024-11-20 07:39:19.100638] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.326 ms, result 0 00:41:54.915 true 00:41:55.177 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:55.441 { 00:41:55.441 "name": "ftl", 00:41:55.441 "properties": [ 00:41:55.441 { 00:41:55.441 "name": "superblock_version", 00:41:55.441 "value": 5, 00:41:55.441 "read-only": true 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "name": "base_device", 00:41:55.441 "bands": [ 00:41:55.441 { 00:41:55.441 "id": 0, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 
00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 1, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 2, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 3, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 4, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 5, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 6, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 7, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 8, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 9, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 10, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 11, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 12, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 13, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 14, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 15, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 16, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 17, 00:41:55.441 "state": "FREE", 00:41:55.441 "validity": 0.0 00:41:55.441 } 00:41:55.441 ], 00:41:55.441 "read-only": true 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "name": "cache_device", 00:41:55.441 "type": "bdev", 00:41:55.441 "chunks": [ 00:41:55.441 { 00:41:55.441 "id": 0, 00:41:55.441 "state": "INACTIVE", 00:41:55.441 "utilization": 0.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 1, 00:41:55.441 "state": "CLOSED", 00:41:55.441 "utilization": 1.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 2, 00:41:55.441 "state": "CLOSED", 00:41:55.441 "utilization": 1.0 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 3, 00:41:55.441 "state": "OPEN", 00:41:55.441 "utilization": 0.001953125 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "id": 4, 00:41:55.441 "state": "OPEN", 00:41:55.441 "utilization": 0.0 00:41:55.441 } 00:41:55.441 ], 00:41:55.441 "read-only": true 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "name": "verbose_mode", 00:41:55.441 "value": true, 00:41:55.441 "unit": "", 00:41:55.441 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:55.441 }, 00:41:55.441 { 00:41:55.441 "name": "prep_upgrade_on_shutdown", 00:41:55.441 "value": false, 00:41:55.441 "unit": "", 00:41:55.441 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:55.441 } 00:41:55.441 ] 00:41:55.441 } 00:41:55.442 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:41:55.442 [2024-11-20 07:39:19.600852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:41:55.442 [2024-11-20 07:39:19.600926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:55.442 [2024-11-20 07:39:19.600944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:41:55.442 [2024-11-20 07:39:19.600956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:55.442 [2024-11-20 07:39:19.600986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:55.442 [2024-11-20 07:39:19.600998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:55.442 [2024-11-20 07:39:19.601010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:55.442 [2024-11-20 07:39:19.601021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:55.442 [2024-11-20 07:39:19.601043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:55.442 [2024-11-20 07:39:19.601055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:55.442 [2024-11-20 07:39:19.601067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:41:55.442 [2024-11-20 07:39:19.601077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:55.442 [2024-11-20 07:39:19.601145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.300 ms, result 0 00:41:55.442 true 00:41:55.442 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:41:55.442 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:55.442 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:41:56.010 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:41:56.010 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:41:56.010 07:39:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:41:56.010 [2024-11-20 07:39:20.173398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:56.010 [2024-11-20 07:39:20.173467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:41:56.010 [2024-11-20 07:39:20.173484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:41:56.010 [2024-11-20 07:39:20.173495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:56.010 [2024-11-20 07:39:20.173525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:56.010 [2024-11-20 07:39:20.173537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:41:56.010 [2024-11-20 07:39:20.173548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:41:56.010 [2024-11-20 07:39:20.173559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:56.010 [2024-11-20 07:39:20.173581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:56.010 [2024-11-20 07:39:20.173593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:41:56.010 [2024-11-20 07:39:20.173604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:41:56.010 [2024-11-20 07:39:20.173614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:41:56.010 [2024-11-20 07:39:20.173682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.273 ms, result 0 00:41:56.010 true 00:41:56.010 07:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:41:56.269 { 00:41:56.269 "name": "ftl", 00:41:56.269 "properties": [ 00:41:56.269 { 00:41:56.269 "name": "superblock_version", 00:41:56.269 "value": 5, 00:41:56.269 "read-only": true 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "name": "base_device", 00:41:56.269 "bands": [ 00:41:56.269 { 00:41:56.269 "id": 0, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 1, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 2, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 3, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 4, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 5, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.269 }, 00:41:56.269 { 00:41:56.269 "id": 6, 00:41:56.269 "state": "FREE", 00:41:56.269 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 7, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 8, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 9, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 10, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 11, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 12, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 13, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 14, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 15, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 16, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 17, 00:41:56.270 "state": "FREE", 00:41:56.270 "validity": 0.0 00:41:56.270 } 00:41:56.270 ], 00:41:56.270 "read-only": true 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "name": "cache_device", 00:41:56.270 "type": "bdev", 00:41:56.270 "chunks": [ 00:41:56.270 { 00:41:56.270 "id": 0, 00:41:56.270 "state": "INACTIVE", 00:41:56.270 "utilization": 0.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 1, 00:41:56.270 "state": "CLOSED", 00:41:56.270 "utilization": 1.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 2, 00:41:56.270 "state": "CLOSED", 00:41:56.270 "utilization": 1.0 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 3, 00:41:56.270 "state": "OPEN", 00:41:56.270 "utilization": 0.001953125 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "id": 4, 00:41:56.270 "state": "OPEN", 00:41:56.270 "utilization": 0.0 00:41:56.270 } 00:41:56.270 ], 00:41:56.270 "read-only": true 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "name": "verbose_mode", 
00:41:56.270 "value": true, 00:41:56.270 "unit": "", 00:41:56.270 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:41:56.270 }, 00:41:56.270 { 00:41:56.270 "name": "prep_upgrade_on_shutdown", 00:41:56.270 "value": true, 00:41:56.270 "unit": "", 00:41:56.270 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:41:56.270 } 00:41:56.270 ] 00:41:56.270 } 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81036 ]] 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81036 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81036 ']' 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81036 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81036 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:56.270 killing process with pid 81036 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81036' 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81036 00:41:56.270 07:39:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81036 00:41:57.647 [2024-11-20 07:39:21.665122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:41:57.647 [2024-11-20 07:39:21.686391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:57.647 [2024-11-20 07:39:21.686467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:41:57.647 [2024-11-20 07:39:21.686484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:41:57.647 [2024-11-20 07:39:21.686496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:57.647 [2024-11-20 07:39:21.686524] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:41:57.647 [2024-11-20 07:39:21.691290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:57.647 [2024-11-20 07:39:21.691342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:41:57.647 [2024-11-20 07:39:21.691358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.744 ms 00:41:57.647 [2024-11-20 07:39:21.691370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.621 [2024-11-20 07:39:30.036568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.621 [2024-11-20 07:39:30.036652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:42:07.622 [2024-11-20 07:39:30.036673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8345.111 ms 00:42:07.622 [2024-11-20 07:39:30.036691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.037909] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.037944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:42:07.622 [2024-11-20 07:39:30.037959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.195 ms 00:42:07.622 [2024-11-20 07:39:30.037972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.039151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.039185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:42:07.622 [2024-11-20 07:39:30.039199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:42:07.622 [2024-11-20 07:39:30.039211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.057580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.057664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:42:07.622 [2024-11-20 07:39:30.057682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.274 ms 00:42:07.622 [2024-11-20 07:39:30.057696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.068376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.068456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:42:07.622 [2024-11-20 07:39:30.068475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.607 ms 00:42:07.622 [2024-11-20 07:39:30.068489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.068621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.068638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:42:07.622 [2024-11-20 07:39:30.068664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:42:07.622 [2024-11-20 07:39:30.068689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.087442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.087513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:42:07.622 [2024-11-20 07:39:30.087531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.726 ms 00:42:07.622 [2024-11-20 07:39:30.087544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.106278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.106360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:42:07.622 [2024-11-20 07:39:30.106378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.674 ms 00:42:07.622 [2024-11-20 07:39:30.106391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.124362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.124432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:42:07.622 [2024-11-20 07:39:30.124448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.911 ms 00:42:07.622 [2024-11-20 07:39:30.124459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.141826] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.141896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:42:07.622 [2024-11-20 07:39:30.141913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.249 ms 00:42:07.622 [2024-11-20 07:39:30.141924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.141974] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:42:07.622 [2024-11-20 07:39:30.141996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:42:07.622 [2024-11-20 07:39:30.142011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:42:07.622 [2024-11-20 07:39:30.142044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:42:07.622 [2024-11-20 07:39:30.142057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:07.622 [2024-11-20 07:39:30.142267] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:42:07.622 [2024-11-20 07:39:30.142279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 70c64441-8edb-41f9-9b4f-ea726476b71f 00:42:07.622 [2024-11-20 07:39:30.142291] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:42:07.622 [2024-11-20 07:39:30.142303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:42:07.622 [2024-11-20 07:39:30.142314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:42:07.622 [2024-11-20 07:39:30.142327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:42:07.622 [2024-11-20 07:39:30.142339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:42:07.622 [2024-11-20 07:39:30.142356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:42:07.622 [2024-11-20 07:39:30.142368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:42:07.622 [2024-11-20 07:39:30.142378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:42:07.622 [2024-11-20 07:39:30.142389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:42:07.622 [2024-11-20 07:39:30.142400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.142418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:42:07.622 [2024-11-20 07:39:30.142431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.427 ms 00:42:07.622 [2024-11-20 07:39:30.142442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.166077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.166156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:42:07.622 [2024-11-20 07:39:30.166183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.584 ms 00:42:07.622 [2024-11-20 07:39:30.166207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.166858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:07.622 [2024-11-20 07:39:30.166892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:42:07.622 [2024-11-20 07:39:30.166906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:42:07.622 [2024-11-20 07:39:30.166918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.242104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.242180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:07.622 [2024-11-20 07:39:30.242203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.242214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.242272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.242285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:07.622 [2024-11-20 07:39:30.242297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.242308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.242451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.242468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:07.622 [2024-11-20 07:39:30.242480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.242491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.242518] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.242531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:07.622 [2024-11-20 07:39:30.242543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.242555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.384836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.384934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:07.622 [2024-11-20 07:39:30.384951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.384977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.501936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.502012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:07.622 [2024-11-20 07:39:30.502029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.622 [2024-11-20 07:39:30.502041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.622 [2024-11-20 07:39:30.502174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.622 [2024-11-20 07:39:30.502189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:07.623 [2024-11-20 07:39:30.502202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 [2024-11-20 07:39:30.502286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.623 [2024-11-20 07:39:30.502301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:07.623 [2024-11-20 07:39:30.502313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 [2024-11-20 07:39:30.502461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.623 [2024-11-20 07:39:30.502484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:07.623 [2024-11-20 07:39:30.502496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 [2024-11-20 07:39:30.502548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.623 [2024-11-20 07:39:30.502574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:42:07.623 [2024-11-20 07:39:30.502586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 [2024-11-20 07:39:30.502641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.623 [2024-11-20 07:39:30.502655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:07.623 [2024-11-20 07:39:30.502666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 
[2024-11-20 07:39:30.502733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:07.623 [2024-11-20 07:39:30.502751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:07.623 [2024-11-20 07:39:30.502763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:07.623 [2024-11-20 07:39:30.502775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:07.623 [2024-11-20 07:39:30.502935] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8816.466 ms, result 0 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81653 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81653 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81653 ']' 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:10.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:10.168 07:39:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:10.168 [2024-11-20 07:39:34.041759] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
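The statistics block dumped during the shutdown above is easy to sanity-check. Assuming the FTL's usual 4 KiB block size (consistent with "total valid LBAs: 524288"), the user writes are exactly the 2 GiB produced by the two fill passes, and the write amplification factor is total writes over user writes:

echo $(( 524288 * 4096 ))              # 2147483648 bytes = 2 GiB of user data
echo "scale=4; 786752 / 524288" | bc   # 1.5006, matching the reported WAF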
00:42:10.168 [2024-11-20 07:39:34.041970] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81653 ] 00:42:10.168 [2024-11-20 07:39:34.228511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.168 [2024-11-20 07:39:34.356713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.545 [2024-11-20 07:39:35.396209] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:11.545 [2024-11-20 07:39:35.396303] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:11.545 [2024-11-20 07:39:35.545849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.545924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:42:11.545 [2024-11-20 07:39:35.545942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:11.545 [2024-11-20 07:39:35.545953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.546031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.546045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:11.545 [2024-11-20 07:39:35.546057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:42:11.545 [2024-11-20 07:39:35.546069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.546104] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:42:11.545 [2024-11-20 07:39:35.547363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:42:11.545 [2024-11-20 07:39:35.547409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.547423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:11.545 [2024-11-20 07:39:35.547436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.317 ms 00:42:11.545 [2024-11-20 07:39:35.547448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.549158] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:42:11.545 [2024-11-20 07:39:35.571335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.571401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:42:11.545 [2024-11-20 07:39:35.571427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.174 ms 00:42:11.545 [2024-11-20 07:39:35.571440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.571558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.571575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:42:11.545 [2024-11-20 07:39:35.571588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:42:11.545 [2024-11-20 07:39:35.571600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.579514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 
07:39:35.579571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:11.545 [2024-11-20 07:39:35.579588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.786 ms 00:42:11.545 [2024-11-20 07:39:35.579600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.579696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.579715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:11.545 [2024-11-20 07:39:35.579728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:42:11.545 [2024-11-20 07:39:35.579740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.579806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.579835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:42:11.545 [2024-11-20 07:39:35.579853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:42:11.545 [2024-11-20 07:39:35.579865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.579900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:42:11.545 [2024-11-20 07:39:35.585244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.585289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:11.545 [2024-11-20 07:39:35.585303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.351 ms 00:42:11.545 [2024-11-20 07:39:35.585336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.585389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.585401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:42:11.545 [2024-11-20 07:39:35.585413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:42:11.545 [2024-11-20 07:39:35.585441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.585525] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:42:11.545 [2024-11-20 07:39:35.585554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:42:11.545 [2024-11-20 07:39:35.585601] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:42:11.545 [2024-11-20 07:39:35.585622] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:42:11.545 [2024-11-20 07:39:35.585732] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:42:11.545 [2024-11-20 07:39:35.585747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:42:11.545 [2024-11-20 07:39:35.585763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:42:11.545 [2024-11-20 07:39:35.585779] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:42:11.545 [2024-11-20 07:39:35.585793] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:42:11.545 [2024-11-20 07:39:35.585811] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:42:11.545 [2024-11-20 07:39:35.585823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:42:11.545 [2024-11-20 07:39:35.585834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:42:11.545 [2024-11-20 07:39:35.585846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:42:11.545 [2024-11-20 07:39:35.585871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.585884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:42:11.545 [2024-11-20 07:39:35.585897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:42:11.545 [2024-11-20 07:39:35.585908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.586007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.545 [2024-11-20 07:39:35.586023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:42:11.545 [2024-11-20 07:39:35.586035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:42:11.545 [2024-11-20 07:39:35.586052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.545 [2024-11-20 07:39:35.586172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:42:11.545 [2024-11-20 07:39:35.586188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:42:11.545 [2024-11-20 07:39:35.586201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:11.545 [2024-11-20 07:39:35.586215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.545 [2024-11-20 07:39:35.586227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:42:11.545 [2024-11-20 07:39:35.586238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:42:11.545 [2024-11-20 07:39:35.586249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:42:11.545 [2024-11-20 07:39:35.586261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:42:11.545 [2024-11-20 07:39:35.586272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:42:11.545 [2024-11-20 07:39:35.586283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.545 [2024-11-20 07:39:35.586293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:42:11.546 [2024-11-20 07:39:35.586305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:42:11.546 [2024-11-20 07:39:35.586316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:42:11.546 [2024-11-20 07:39:35.586339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:42:11.546 [2024-11-20 07:39:35.586350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:42:11.546 [2024-11-20 07:39:35.586372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:42:11.546 [2024-11-20 07:39:35.586382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586394] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:42:11.546 [2024-11-20 07:39:35.586405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:42:11.546 [2024-11-20 07:39:35.586437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:42:11.546 [2024-11-20 07:39:35.586484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:42:11.546 [2024-11-20 07:39:35.586517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:42:11.546 [2024-11-20 07:39:35.586549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:42:11.546 [2024-11-20 07:39:35.586583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:42:11.546 [2024-11-20 07:39:35.586615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:42:11.546 [2024-11-20 07:39:35.586647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:42:11.546 [2024-11-20 07:39:35.586658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586669] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:42:11.546 [2024-11-20 07:39:35.586682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:42:11.546 [2024-11-20 07:39:35.586694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:11.546 [2024-11-20 07:39:35.586722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:42:11.546 [2024-11-20 07:39:35.586733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:42:11.546 [2024-11-20 07:39:35.586745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:42:11.546 [2024-11-20 07:39:35.586756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:42:11.546 [2024-11-20 07:39:35.586767] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:42:11.546 [2024-11-20 07:39:35.586778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:42:11.546 [2024-11-20 07:39:35.586791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:42:11.546 [2024-11-20 07:39:35.586806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:42:11.546 [2024-11-20 07:39:35.586854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:42:11.546 [2024-11-20 07:39:35.586891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:42:11.546 [2024-11-20 07:39:35.586903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:42:11.546 [2024-11-20 07:39:35.586915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:42:11.546 [2024-11-20 07:39:35.586927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.586999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:42:11.546 [2024-11-20 07:39:35.587011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:42:11.546 [2024-11-20 07:39:35.587025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.587038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:11.546 [2024-11-20 07:39:35.587050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:42:11.546 [2024-11-20 07:39:35.587062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:42:11.546 [2024-11-20 07:39:35.587074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:42:11.546 [2024-11-20 07:39:35.587089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:11.546 [2024-11-20 07:39:35.587101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:42:11.546 [2024-11-20 07:39:35.587113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.994 ms 00:42:11.546 [2024-11-20 07:39:35.587125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:11.546 [2024-11-20 07:39:35.587183] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:42:11.546 [2024-11-20 07:39:35.587200] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:42:14.081 [2024-11-20 07:39:38.077997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.078099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:42:14.081 [2024-11-20 07:39:38.078119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2490.794 ms 00:42:14.081 [2024-11-20 07:39:38.078144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.121583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.121654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:14.081 [2024-11-20 07:39:38.121673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.994 ms 00:42:14.081 [2024-11-20 07:39:38.121686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.121840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.121861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:42:14.081 [2024-11-20 07:39:38.121874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:42:14.081 [2024-11-20 07:39:38.121886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.170808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.170886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:14.081 [2024-11-20 07:39:38.170903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.866 ms 00:42:14.081 [2024-11-20 07:39:38.170920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.170993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.171005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:14.081 [2024-11-20 07:39:38.171018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:14.081 [2024-11-20 07:39:38.171029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.171585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.171614] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:14.081 [2024-11-20 07:39:38.171628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:42:14.081 [2024-11-20 07:39:38.171639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.171705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.171718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:14.081 [2024-11-20 07:39:38.171730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:42:14.081 [2024-11-20 07:39:38.171741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.194233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.194301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:14.081 [2024-11-20 07:39:38.194320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.465 ms 00:42:14.081 [2024-11-20 07:39:38.194332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.215492] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:42:14.081 [2024-11-20 07:39:38.215578] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:42:14.081 [2024-11-20 07:39:38.215598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.215627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:42:14.081 [2024-11-20 07:39:38.215643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.072 ms 00:42:14.081 [2024-11-20 07:39:38.215654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.238401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.238492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:42:14.081 [2024-11-20 07:39:38.238509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.646 ms 00:42:14.081 [2024-11-20 07:39:38.238539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.260331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.260408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:42:14.081 [2024-11-20 07:39:38.260425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.687 ms 00:42:14.081 [2024-11-20 07:39:38.260437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.081 [2024-11-20 07:39:38.282248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.081 [2024-11-20 07:39:38.282324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:42:14.081 [2024-11-20 07:39:38.282341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.712 ms 00:42:14.081 [2024-11-20 07:39:38.282353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.340 [2024-11-20 07:39:38.283393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.340 [2024-11-20 07:39:38.283439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:42:14.340 [2024-11-20 
07:39:38.283454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:42:14.340 [2024-11-20 07:39:38.283465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.340 [2024-11-20 07:39:38.395997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.340 [2024-11-20 07:39:38.396108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:42:14.340 [2024-11-20 07:39:38.396127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 112.498 ms 00:42:14.340 [2024-11-20 07:39:38.396140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.340 [2024-11-20 07:39:38.410809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:42:14.340 [2024-11-20 07:39:38.412026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.340 [2024-11-20 07:39:38.412060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:42:14.340 [2024-11-20 07:39:38.412078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.780 ms 00:42:14.340 [2024-11-20 07:39:38.412090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.340 [2024-11-20 07:39:38.412264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.340 [2024-11-20 07:39:38.412283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:42:14.340 [2024-11-20 07:39:38.412296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:42:14.340 [2024-11-20 07:39:38.412307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.340 [2024-11-20 07:39:38.412383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.341 [2024-11-20 07:39:38.412396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:42:14.341 [2024-11-20 07:39:38.412408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:42:14.341 [2024-11-20 07:39:38.412419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.341 [2024-11-20 07:39:38.412448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.341 [2024-11-20 07:39:38.412460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:42:14.341 [2024-11-20 07:39:38.412471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:14.341 [2024-11-20 07:39:38.412487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.341 [2024-11-20 07:39:38.412522] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:42:14.341 [2024-11-20 07:39:38.412535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.341 [2024-11-20 07:39:38.412546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:42:14.341 [2024-11-20 07:39:38.412558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:42:14.341 [2024-11-20 07:39:38.412569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.341 [2024-11-20 07:39:38.455442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.341 [2024-11-20 07:39:38.455544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:42:14.341 [2024-11-20 07:39:38.455563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.840 ms 00:42:14.341 [2024-11-20 07:39:38.455575] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.341 [2024-11-20 07:39:38.455711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.341 [2024-11-20 07:39:38.455726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:42:14.341 [2024-11-20 07:39:38.455739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:42:14.341 [2024-11-20 07:39:38.455750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.341 [2024-11-20 07:39:38.457075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2910.660 ms, result 0 00:42:14.341 [2024-11-20 07:39:38.471978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:14.341 [2024-11-20 07:39:38.488109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:42:14.341 [2024-11-20 07:39:38.498333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:42:14.600 [2024-11-20 07:39:38.746342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.600 [2024-11-20 07:39:38.746412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:42:14.600 [2024-11-20 07:39:38.746431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:42:14.600 [2024-11-20 07:39:38.746465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.600 [2024-11-20 07:39:38.746499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.600 [2024-11-20 07:39:38.746513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:42:14.600 [2024-11-20 07:39:38.746526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:42:14.600 [2024-11-20 07:39:38.746538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.600 [2024-11-20 07:39:38.746563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:14.600 [2024-11-20 07:39:38.746576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:42:14.600 [2024-11-20 07:39:38.746588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:42:14.600 [2024-11-20 07:39:38.746600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:14.600 [2024-11-20 07:39:38.746682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:42:14.600 true 00:42:14.600 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:42:14.859 { 00:42:14.859 "name": "ftl", 00:42:14.859 "properties": [ 00:42:14.859 { 00:42:14.859 "name": "superblock_version", 00:42:14.859 "value": 5, 00:42:14.859 "read-only": true 00:42:14.859 }, 
00:42:14.859 { 00:42:14.859 "name": "base_device", 00:42:14.859 "bands": [ 00:42:14.859 { 00:42:14.859 "id": 0, 00:42:14.859 "state": "CLOSED", 00:42:14.859 "validity": 1.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 1, 00:42:14.859 "state": "CLOSED", 00:42:14.859 "validity": 1.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 2, 00:42:14.859 "state": "CLOSED", 00:42:14.859 "validity": 0.007843137254901933 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 3, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 4, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 5, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 6, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 7, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 8, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 9, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 10, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 11, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 12, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 13, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 14, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 15, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 16, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 17, 00:42:14.859 "state": "FREE", 00:42:14.859 "validity": 0.0 00:42:14.859 } 00:42:14.859 ], 00:42:14.859 "read-only": true 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "name": "cache_device", 00:42:14.859 "type": "bdev", 00:42:14.859 "chunks": [ 00:42:14.859 { 00:42:14.859 "id": 0, 00:42:14.859 "state": "INACTIVE", 00:42:14.859 "utilization": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 1, 00:42:14.859 "state": "OPEN", 00:42:14.859 "utilization": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 2, 00:42:14.859 "state": "OPEN", 00:42:14.859 "utilization": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 3, 00:42:14.859 "state": "FREE", 00:42:14.859 "utilization": 0.0 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "id": 4, 00:42:14.859 "state": "FREE", 00:42:14.859 "utilization": 0.0 00:42:14.859 } 00:42:14.859 ], 00:42:14.859 "read-only": true 00:42:14.859 }, 00:42:14.859 { 00:42:14.859 "name": "verbose_mode", 00:42:14.859 "value": true, 00:42:14.859 "unit": "", 00:42:14.859 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:42:14.859 }, 00:42:14.860 { 00:42:14.860 "name": "prep_upgrade_on_shutdown", 00:42:14.860 "value": false, 00:42:14.860 "unit": "", 00:42:14.860 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:42:14.860 } 00:42:14.860 ] 00:42:14.860 } 00:42:14.860 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:42:14.860 07:39:38 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:42:14.860 07:39:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:42:15.118 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:42:15.118 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:42:15.118 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:42:15.118 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:42:15.118 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:42:15.376 Validate MD5 checksum, iteration 1 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:15.376 07:39:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:15.640 [2024-11-20 07:39:39.605135] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
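The two rpc.py probes traced above gate the checksum pass: bdev_ftl_get_properties is piped through jq once to count cache_device chunks with non-zero utilization and once to count bands in state OPENED, and the test only proceeds when both counts are zero, i.e. nothing is still buffered in the NV cache or mid-write in a band. A minimal sketch of that gating, with the rpc.py path, bdev name, and jq filters copied from the trace (the error handling is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Gate 1: no NV cache chunk may still hold data.
  used=$("$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "cache_device") | .chunks[]
      | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && { echo "NV cache chunks still in use: $used"; exit 1; }
  # Gate 2: no band may be mid-write; filter copied exactly as traced.
  opened=$("$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "bands") | .bands[]
      | select(.state == "OPENED")] | length')
  [[ $opened -ne 0 ]] && { echo "open bands remain: $opened"; exit 1; }

Both probes answer 0 here (used=0, opened=0), so the run drops into test_validate_checksum and starts the spdk_dd read whose startup banner appears just above.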
00:42:15.640 [2024-11-20 07:39:39.605291] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81723 ] 00:42:15.640 [2024-11-20 07:39:39.785258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.924 [2024-11-20 07:39:39.912149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.828  [2024-11-20T07:39:42.599Z] Copying: 573/1024 [MB] (573 MBps) [2024-11-20T07:39:44.502Z] Copying: 1024/1024 [MB] (average 579 MBps) 00:42:20.299 00:42:20.299 07:39:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:42:20.299 07:39:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:22.202 Validate MD5 checksum, iteration 2 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8cd062133a36396af508c316b29b8642 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8cd062133a36396af508c316b29b8642 != \8\c\d\0\6\2\1\3\3\a\3\6\3\9\6\a\f\5\0\8\c\3\1\6\b\2\9\b\8\6\4\2 ]] 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:22.202 07:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:22.202 [2024-11-20 07:39:46.157939] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
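What the trace shows around this point is one pass of the validate loop: tcp_dd wraps spdk_dd so the read goes through SPDK's bdev layer over NVMe/TCP rather than a kernel block device, each iteration pulls the next 1024 MiB window out of ftln1 (note --skip advancing from 0 to 1024), and the local copy is hashed with md5sum. The backslash-escaped right-hand side in the traced comparison is just bash xtrace printing the literal pattern of a [[ ... != ... ]] test. A condensed sketch under those observations; paths and flags are taken from the trace, while "iterations" and the "expected" digest array stand in for state the real script carries over from an earlier write pass:

  spdk=/home/vagrant/spdk_repo/spdk
  file=$spdk/test/ftl/file
  tcp_dd() {
      # --ib names a bdev (ftln1) attached via the initiator JSON config,
      # so the data path is SPDK's NVMe/TCP initiator, not the kernel.
      "$spdk/build/bin/spdk_dd" '--cpumask=[1]' \
          --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json="$spdk/test/ftl/config/ini.json" "$@"
  }
  iterations=2   # this run checks two 1 GiB windows
  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # 1024 blocks of 1 MiB at queue depth 2, starting $skip MiB in.
      tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$file" | cut -f1 -d' ')
      [[ $sum == "${expected[i]}" ]] || exit 1   # expected[] is illustrative
  done

In this pass both windows match (8cd062133a36396af508c316b29b8642 above, then 06494b8b8f9136c6dbc408cbd81a301c below), which establishes the baseline before the target is killed.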
00:42:22.202 [2024-11-20 07:39:46.158131] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81796 ] 00:42:22.202 [2024-11-20 07:39:46.361504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:22.461 [2024-11-20 07:39:46.521550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:24.365  [2024-11-20T07:39:49.146Z] Copying: 534/1024 [MB] (534 MBps) [2024-11-20T07:39:52.445Z] Copying: 1024/1024 [MB] (average 563 MBps) 00:42:28.243 00:42:28.243 07:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:42:28.243 07:39:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=06494b8b8f9136c6dbc408cbd81a301c 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 06494b8b8f9136c6dbc408cbd81a301c != \0\6\4\9\4\b\8\b\8\f\9\1\3\6\c\6\d\b\c\4\0\8\c\b\d\8\1\a\3\0\1\c ]] 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81653 ]] 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81653 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:29.618 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81874 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81874 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81874 ']' 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:29.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:29.619 07:39:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:29.877 [2024-11-20 07:39:53.871150] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:42:29.877 [2024-11-20 07:39:53.871301] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81874 ] 00:42:29.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81653 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:42:29.877 [2024-11-20 07:39:54.045130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:30.137 [2024-11-20 07:39:54.171108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.073 [2024-11-20 07:39:55.222570] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:31.073 [2024-11-20 07:39:55.222650] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:42:31.333 [2024-11-20 07:39:55.372046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.372110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:42:31.333 [2024-11-20 07:39:55.372128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:31.333 [2024-11-20 07:39:55.372139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.372210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.372225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:31.333 [2024-11-20 07:39:55.372237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:42:31.333 [2024-11-20 07:39:55.372248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.372283] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:42:31.333 [2024-11-20 07:39:55.373511] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:42:31.333 [2024-11-20 07:39:55.373543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.373556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:31.333 [2024-11-20 07:39:55.373570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.273 ms 00:42:31.333 [2024-11-20 07:39:55.373581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.374050] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:42:31.333 [2024-11-20 07:39:55.402042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.402117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:42:31.333 [2024-11-20 07:39:55.402136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.987 ms 00:42:31.333 [2024-11-20 07:39:55.402155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.419888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:42:31.333 [2024-11-20 07:39:55.419956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:42:31.333 [2024-11-20 07:39:55.419977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:42:31.333 [2024-11-20 07:39:55.419989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.420637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.420660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:31.333 [2024-11-20 07:39:55.420674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:42:31.333 [2024-11-20 07:39:55.420686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.420759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.420779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:31.333 [2024-11-20 07:39:55.420791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:42:31.333 [2024-11-20 07:39:55.420803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.420859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.420874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:42:31.333 [2024-11-20 07:39:55.420886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:42:31.333 [2024-11-20 07:39:55.420898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.420930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:42:31.333 [2024-11-20 07:39:55.426530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.426578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:31.333 [2024-11-20 07:39:55.426594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.606 ms 00:42:31.333 [2024-11-20 07:39:55.426606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.426657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.426670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:42:31.333 [2024-11-20 07:39:55.426682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:42:31.333 [2024-11-20 07:39:55.426694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.426754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:42:31.333 [2024-11-20 07:39:55.426782] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:42:31.333 [2024-11-20 07:39:55.426843] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:42:31.333 [2024-11-20 07:39:55.426870] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:42:31.333 [2024-11-20 07:39:55.426977] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:42:31.333 [2024-11-20 07:39:55.426993] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:42:31.333 [2024-11-20 07:39:55.427009] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:42:31.333 [2024-11-20 07:39:55.427024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427037] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427050] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:42:31.333 [2024-11-20 07:39:55.427061] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:42:31.333 [2024-11-20 07:39:55.427074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:42:31.333 [2024-11-20 07:39:55.427085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:42:31.333 [2024-11-20 07:39:55.427098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.427113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:42:31.333 [2024-11-20 07:39:55.427125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:42:31.333 [2024-11-20 07:39:55.427137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.427229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.333 [2024-11-20 07:39:55.427246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:42:31.333 [2024-11-20 07:39:55.427259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:42:31.333 [2024-11-20 07:39:55.427270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.333 [2024-11-20 07:39:55.427391] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:42:31.333 [2024-11-20 07:39:55.427405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:42:31.333 [2024-11-20 07:39:55.427420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:42:31.333 [2024-11-20 07:39:55.427455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:42:31.333 [2024-11-20 07:39:55.427476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:42:31.333 [2024-11-20 07:39:55.427486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:42:31.333 [2024-11-20 07:39:55.427496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:42:31.333 [2024-11-20 07:39:55.427516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:42:31.333 [2024-11-20 07:39:55.427527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:42:31.333 [2024-11-20 07:39:55.427547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:42:31.333 [2024-11-20 07:39:55.427557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:42:31.333 [2024-11-20 07:39:55.427577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:42:31.333 [2024-11-20 07:39:55.427604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.333 [2024-11-20 07:39:55.427615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:42:31.333 [2024-11-20 07:39:55.427626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:42:31.333 [2024-11-20 07:39:55.427637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:42:31.333 [2024-11-20 07:39:55.427675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:42:31.333 [2024-11-20 07:39:55.427686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:42:31.333 [2024-11-20 07:39:55.427707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:42:31.333 [2024-11-20 07:39:55.427718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:42:31.333 [2024-11-20 07:39:55.427740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:42:31.333 [2024-11-20 07:39:55.427751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:42:31.333 [2024-11-20 07:39:55.427762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:42:31.333 [2024-11-20 07:39:55.427772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:42:31.334 [2024-11-20 07:39:55.427783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:42:31.334 [2024-11-20 07:39:55.427804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:42:31.334 [2024-11-20 07:39:55.427815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:42:31.334 [2024-11-20 07:39:55.427836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:42:31.334 [2024-11-20 07:39:55.427881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:42:31.334 [2024-11-20 07:39:55.427891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427902] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:42:31.334 [2024-11-20 07:39:55.427913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:42:31.334 [2024-11-20 07:39:55.427925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:42:31.334 [2024-11-20 07:39:55.427936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:42:31.334 [2024-11-20 07:39:55.427948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:42:31.334 [2024-11-20 07:39:55.427959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:42:31.334 [2024-11-20 07:39:55.427970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:42:31.334 [2024-11-20 07:39:55.427981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:42:31.334 [2024-11-20 07:39:55.427992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:42:31.334 [2024-11-20 07:39:55.428003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:42:31.334 [2024-11-20 07:39:55.428016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:42:31.334 [2024-11-20 07:39:55.428030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:42:31.334 [2024-11-20 07:39:55.428057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:42:31.334 [2024-11-20 07:39:55.428093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:42:31.334 [2024-11-20 07:39:55.428105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:42:31.334 [2024-11-20 07:39:55.428117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:42:31.334 [2024-11-20 07:39:55.428129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:42:31.334 [2024-11-20 07:39:55.428213] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:42:31.334 [2024-11-20 07:39:55.428227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:31.334 [2024-11-20 07:39:55.428252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:42:31.334 [2024-11-20 07:39:55.428264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:42:31.334 [2024-11-20 07:39:55.428275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:42:31.334 [2024-11-20 07:39:55.428288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.428305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:42:31.334 [2024-11-20 07:39:55.428317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:42:31.334 [2024-11-20 07:39:55.428329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.470627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.470685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:31.334 [2024-11-20 07:39:55.470702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.231 ms 00:42:31.334 [2024-11-20 07:39:55.470715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.470782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.470795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:42:31.334 [2024-11-20 07:39:55.470807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:42:31.334 [2024-11-20 07:39:55.470837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.524105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.524162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:31.334 [2024-11-20 07:39:55.524195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.154 ms 00:42:31.334 [2024-11-20 07:39:55.524208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.524283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.524297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:31.334 [2024-11-20 07:39:55.524310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:31.334 [2024-11-20 07:39:55.524326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.524481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.524497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:31.334 [2024-11-20 07:39:55.524510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:42:31.334 [2024-11-20 07:39:55.524521] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:42:31.334 [2024-11-20 07:39:55.524569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.334 [2024-11-20 07:39:55.524583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:31.334 [2024-11-20 07:39:55.524595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:42:31.334 [2024-11-20 07:39:55.524617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.547209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.547272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:31.594 [2024-11-20 07:39:55.547290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.559 ms 00:42:31.594 [2024-11-20 07:39:55.547302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.547511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.547535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:42:31.594 [2024-11-20 07:39:55.547549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:42:31.594 [2024-11-20 07:39:55.547561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.587753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.587838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:42:31.594 [2024-11-20 07:39:55.587857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.155 ms 00:42:31.594 [2024-11-20 07:39:55.587870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.605850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.605916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:42:31.594 [2024-11-20 07:39:55.605941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.775 ms 00:42:31.594 [2024-11-20 07:39:55.605952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.705703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.705778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:42:31.594 [2024-11-20 07:39:55.705806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.613 ms 00:42:31.594 [2024-11-20 07:39:55.705829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.706056] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:42:31.594 [2024-11-20 07:39:55.706221] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:42:31.594 [2024-11-20 07:39:55.706355] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:42:31.594 [2024-11-20 07:39:55.706500] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:42:31.594 [2024-11-20 07:39:55.706516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.706528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:42:31.594 [2024-11-20 
07:39:55.706542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.591 ms 00:42:31.594 [2024-11-20 07:39:55.706554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.706685] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:42:31.594 [2024-11-20 07:39:55.706704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.706723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:42:31.594 [2024-11-20 07:39:55.706736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:42:31.594 [2024-11-20 07:39:55.706748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.735703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.735787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:42:31.594 [2024-11-20 07:39:55.735805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.920 ms 00:42:31.594 [2024-11-20 07:39:55.735828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.752224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.752293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:42:31.594 [2024-11-20 07:39:55.752311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:42:31.594 [2024-11-20 07:39:55.752322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:31.594 [2024-11-20 07:39:55.752464] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:42:31.594 [2024-11-20 07:39:55.752667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:31.594 [2024-11-20 07:39:55.752685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:42:31.594 [2024-11-20 07:39:55.752697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:42:31.594 [2024-11-20 07:39:55.752708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.162 [2024-11-20 07:39:56.259327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.162 [2024-11-20 07:39:56.259407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:32.162 [2024-11-20 07:39:56.259427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 504.923 ms 00:42:32.162 [2024-11-20 07:39:56.259441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.162 [2024-11-20 07:39:56.266221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.162 [2024-11-20 07:39:56.266280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:32.162 [2024-11-20 07:39:56.266297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:42:32.162 [2024-11-20 07:39:56.266311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.162 [2024-11-20 07:39:56.266797] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:42:32.162 [2024-11-20 07:39:56.266839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.162 [2024-11-20 07:39:56.266851] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:32.162 [2024-11-20 07:39:56.266864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.497 ms 00:42:32.162 [2024-11-20 07:39:56.266877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.162 [2024-11-20 07:39:56.266933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.162 [2024-11-20 07:39:56.266948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:32.162 [2024-11-20 07:39:56.266962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:32.162 [2024-11-20 07:39:56.266973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.162 [2024-11-20 07:39:56.267025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 514.557 ms, result 0 00:42:32.162 [2024-11-20 07:39:56.267078] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:42:32.162 [2024-11-20 07:39:56.267172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.162 [2024-11-20 07:39:56.267184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:42:32.162 [2024-11-20 07:39:56.267196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:42:32.162 [2024-11-20 07:39:56.267207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.729 [2024-11-20 07:39:56.762714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.729 [2024-11-20 07:39:56.763047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:42:32.729 [2024-11-20 07:39:56.763078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 493.928 ms 00:42:32.730 [2024-11-20 07:39:56.763091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.769851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.770116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:42:32.730 [2024-11-20 07:39:56.770152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.318 ms 00:42:32.730 [2024-11-20 07:39:56.770165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.770718] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:42:32.730 [2024-11-20 07:39:56.770747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.770760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:42:32.730 [2024-11-20 07:39:56.770773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.506 ms 00:42:32.730 [2024-11-20 07:39:56.770784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.770843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.770858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:42:32.730 [2024-11-20 07:39:56.770871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:42:32.730 [2024-11-20 07:39:56.770883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 
07:39:56.770933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 503.846 ms, result 0 00:42:32.730 [2024-11-20 07:39:56.770983] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:32.730 [2024-11-20 07:39:56.770999] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:42:32.730 [2024-11-20 07:39:56.771014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.771027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:42:32.730 [2024-11-20 07:39:56.771040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1018.566 ms 00:42:32.730 [2024-11-20 07:39:56.771052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.771091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.771104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:42:32.730 [2024-11-20 07:39:56.771122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:42:32.730 [2024-11-20 07:39:56.771134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.786197] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:42:32.730 [2024-11-20 07:39:56.786447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.786463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:42:32.730 [2024-11-20 07:39:56.786479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.290 ms 00:42:32.730 [2024-11-20 07:39:56.786490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.787262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.787289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:42:32.730 [2024-11-20 07:39:56.787308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:42:32.730 [2024-11-20 07:39:56.787320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.789719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.789951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:42:32.730 [2024-11-20 07:39:56.789980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.373 ms 00:42:32.730 [2024-11-20 07:39:56.789993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.790081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.790095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:42:32.730 [2024-11-20 07:39:56.790107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:42:32.730 [2024-11-20 07:39:56.790127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.790299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.790315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:42:32.730 
[2024-11-20 07:39:56.790327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:42:32.730 [2024-11-20 07:39:56.790339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.790369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.790382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:42:32.730 [2024-11-20 07:39:56.790394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:32.730 [2024-11-20 07:39:56.790406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.790443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:42:32.730 [2024-11-20 07:39:56.790460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.790473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:42:32.730 [2024-11-20 07:39:56.790485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:42:32.730 [2024-11-20 07:39:56.790496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.790564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:32.730 [2024-11-20 07:39:56.790578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:42:32.730 [2024-11-20 07:39:56.790591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:42:32.730 [2024-11-20 07:39:56.790603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:32.730 [2024-11-20 07:39:56.791896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1419.275 ms, result 0 00:42:32.730 [2024-11-20 07:39:56.807183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:32.730 [2024-11-20 07:39:56.823227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:42:32.730 [2024-11-20 07:39:56.833527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:32.730 Validate MD5 checksum, iteration 1 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:32.730 07:39:56 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:32.730 07:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:42:33.040 [2024-11-20 07:39:56.995437] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 00:42:33.040 [2024-11-20 07:39:56.995906] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81909 ] 00:42:33.040 [2024-11-20 07:39:57.199771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.300 [2024-11-20 07:39:57.372347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.200  [2024-11-20T07:39:59.970Z] Copying: 597/1024 [MB] (597 MBps) [2024-11-20T07:40:01.869Z] Copying: 1024/1024 [MB] (average 548 MBps) 00:42:37.666 00:42:37.666 07:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:42:37.666 07:40:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:39.569 Validate MD5 checksum, iteration 2 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8cd062133a36396af508c316b29b8642 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8cd062133a36396af508c316b29b8642 != \8\c\d\0\6\2\1\3\3\a\3\6\3\9\6\a\f\5\0\8\c\3\1\6\b\2\9\b\8\6\4\2 ]] 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:42:39.569 07:40:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:42:39.826 [2024-11-20 07:40:03.815901] Starting SPDK v25.01-pre git sha1 
400f484f7 / DPDK 24.03.0 initialization... 00:42:39.826 [2024-11-20 07:40:03.816052] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81986 ] 00:42:40.084 [2024-11-20 07:40:04.047592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:40.084 [2024-11-20 07:40:04.183037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:41.989  [2024-11-20T07:40:07.136Z] Copying: 493/1024 [MB] (493 MBps) [2024-11-20T07:40:07.136Z] Copying: 993/1024 [MB] (500 MBps) [2024-11-20T07:40:08.511Z] Copying: 1024/1024 [MB] (average 496 MBps) 00:42:44.308 00:42:44.308 07:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:42:44.308 07:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=06494b8b8f9136c6dbc408cbd81a301c 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 06494b8b8f9136c6dbc408cbd81a301c != \0\6\4\9\4\b\8\b\8\f\9\1\3\6\c\6\d\b\c\4\0\8\c\b\d\8\1\a\3\0\1\c ]] 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81874 ]] 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81874 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81874 ']' 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81874 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81874 00:42:46.846 killing process with pid 81874 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81874' 
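(Annotation: the ftl/upgrade_shutdown.sh@96-105 xtrace lines above record the checksum pass that gates this test. Below is a minimal bash sketch of that loop, reconstructed only from commands visible in this log; the test_validate_checksum name, the tcp_dd helper, the skip/iterations variables and the block sizes all appear in the trace, while $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl and the way the reference checksum is loaded from file.md5 is an assumption based on the file/file.md5 cleanup seen later in the trace.)

# Sketch reconstructed from the trace above; not the verbatim SPDK script.
test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks out of the ftln1 bdev over NVMe/TCP,
        # resuming where the previous iteration stopped
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # Assumed: the reference checksum captured before the shutdown/upgrade
        # lives in file.md5 next to the data file
        [[ $sum == "$(cut -f1 -d' ' "$testdir/file.md5")" ]] || return 1
    done
}

(Advancing skip by 1024 blocks per pass is what produces the skip=1024 and skip=2048 values logged above: the two iterations read disjoint 1 GiB halves of the device, so both md5 sums together cover the full range written before the upgrade/shutdown.)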
00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81874 00:42:46.846 07:40:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81874 00:42:48.224 [2024-11-20 07:40:12.253491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:42:48.224 [2024-11-20 07:40:12.276463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.276545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:42:48.224 [2024-11-20 07:40:12.276567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:42:48.224 [2024-11-20 07:40:12.276581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.276614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:42:48.224 [2024-11-20 07:40:12.282491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.282535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:42:48.224 [2024-11-20 07:40:12.282553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.854 ms 00:42:48.224 [2024-11-20 07:40:12.282574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.282879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.282897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:42:48.224 [2024-11-20 07:40:12.282912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:42:48.224 [2024-11-20 07:40:12.282925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.284430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.284472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:42:48.224 [2024-11-20 07:40:12.284488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.481 ms 00:42:48.224 [2024-11-20 07:40:12.284501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.285690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.285733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:42:48.224 [2024-11-20 07:40:12.285749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:42:48.224 [2024-11-20 07:40:12.285761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.305475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.305546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:42:48.224 [2024-11-20 07:40:12.305566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.664 ms 00:42:48.224 [2024-11-20 07:40:12.305595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.316144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.316205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:42:48.224 [2024-11-20 07:40:12.316224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.495 ms 00:42:48.224 [2024-11-20 
07:40:12.316238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.316353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.316370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:42:48.224 [2024-11-20 07:40:12.316384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:42:48.224 [2024-11-20 07:40:12.316398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.335433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.335766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:42:48.224 [2024-11-20 07:40:12.335797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.998 ms 00:42:48.224 [2024-11-20 07:40:12.335811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.354677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.354961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:42:48.224 [2024-11-20 07:40:12.354988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.775 ms 00:42:48.224 [2024-11-20 07:40:12.355002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.373469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.373525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:42:48.224 [2024-11-20 07:40:12.373542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.407 ms 00:42:48.224 [2024-11-20 07:40:12.373554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.224 [2024-11-20 07:40:12.391842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.224 [2024-11-20 07:40:12.392122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:42:48.225 [2024-11-20 07:40:12.392149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.163 ms 00:42:48.225 [2024-11-20 07:40:12.392161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.225 [2024-11-20 07:40:12.392225] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:42:48.225 [2024-11-20 07:40:12.392249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:42:48.225 [2024-11-20 07:40:12.392266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:42:48.225 [2024-11-20 07:40:12.392280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:42:48.225 [2024-11-20 07:40:12.392294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:48.225 [2024-11-20 07:40:12.392484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:42:48.225 [2024-11-20 07:40:12.392497] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 70c64441-8edb-41f9-9b4f-ea726476b71f 00:42:48.225 [2024-11-20 07:40:12.392509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:42:48.225 [2024-11-20 07:40:12.392522] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:42:48.225 [2024-11-20 07:40:12.392533] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:42:48.225 [2024-11-20 07:40:12.392546] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:42:48.225 [2024-11-20 07:40:12.392558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:42:48.225 [2024-11-20 07:40:12.392571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:42:48.225 [2024-11-20 07:40:12.392583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:42:48.225 [2024-11-20 07:40:12.392594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:42:48.225 [2024-11-20 07:40:12.392604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:42:48.225 [2024-11-20 07:40:12.392617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.225 [2024-11-20 07:40:12.392649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:42:48.225 [2024-11-20 07:40:12.392662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:42:48.225 [2024-11-20 07:40:12.392674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.225 [2024-11-20 07:40:12.418355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.225 [2024-11-20 07:40:12.418566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:42:48.225 [2024-11-20 07:40:12.418691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.637 ms 00:42:48.225 [2024-11-20 07:40:12.418752] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.225 [2024-11-20 07:40:12.419609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:42:48.225 [2024-11-20 07:40:12.419737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:42:48.225 [2024-11-20 07:40:12.419837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.761 ms 00:42:48.225 [2024-11-20 07:40:12.419913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.484 [2024-11-20 07:40:12.503888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.484 [2024-11-20 07:40:12.504159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:42:48.484 [2024-11-20 07:40:12.504325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.484 [2024-11-20 07:40:12.504438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.484 [2024-11-20 07:40:12.504556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.485 [2024-11-20 07:40:12.504750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:42:48.485 [2024-11-20 07:40:12.504790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.485 [2024-11-20 07:40:12.504846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.485 [2024-11-20 07:40:12.505044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.485 [2024-11-20 07:40:12.505091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:42:48.485 [2024-11-20 07:40:12.505260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.485 [2024-11-20 07:40:12.505314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.485 [2024-11-20 07:40:12.505403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.485 [2024-11-20 07:40:12.505662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:42:48.485 [2024-11-20 07:40:12.505756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.485 [2024-11-20 07:40:12.505872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.485 [2024-11-20 07:40:12.667031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.485 [2024-11-20 07:40:12.667376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:42:48.485 [2024-11-20 07:40:12.667484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.485 [2024-11-20 07:40:12.667525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.793902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.794282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:42:48.744 [2024-11-20 07:40:12.794385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.794429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.794617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.794661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:42:48.744 [2024-11-20 07:40:12.794778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:42:48.744 [2024-11-20 07:40:12.794872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.795004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.795124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:42:48.744 [2024-11-20 07:40:12.795175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.795226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.795535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.795625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:42:48.744 [2024-11-20 07:40:12.795702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.795740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.795902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.796020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:42:48.744 [2024-11-20 07:40:12.796039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.796076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.796135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.796149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:42:48.744 [2024-11-20 07:40:12.796163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.796175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.796247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:42:48.744 [2024-11-20 07:40:12.796261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:42:48.744 [2024-11-20 07:40:12.796279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:42:48.744 [2024-11-20 07:40:12.796291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:42:48.744 [2024-11-20 07:40:12.796450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 519.941 ms, result 0 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:42:50.649 Remove shared memory files 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81653 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:42:50.649 ************************************ 00:42:50.649 END TEST ftl_upgrade_shutdown 00:42:50.649 ************************************ 00:42:50.649 00:42:50.649 real 1m35.946s 00:42:50.649 user 2m14.283s 00:42:50.649 sys 0m26.064s 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:50.649 07:40:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@14 -- # killprocess 74799 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@954 -- # '[' -z 74799 ']' 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@958 -- # kill -0 74799 00:42:50.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74799) - No such process 00:42:50.649 Process with pid 74799 is not found 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74799 is not found' 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:42:50.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82123 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82123 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@835 -- # '[' -z 82123 ']' 00:42:50.649 07:40:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:50.649 07:40:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:50.649 [2024-11-20 07:40:14.616895] Starting SPDK v25.01-pre git sha1 400f484f7 / DPDK 24.03.0 initialization... 
00:42:50.649 [2024-11-20 07:40:14.617360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82123 ] 00:42:50.649 [2024-11-20 07:40:14.814953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.908 [2024-11-20 07:40:14.979790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.282 07:40:16 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:52.282 07:40:16 ftl -- common/autotest_common.sh@868 -- # return 0 00:42:52.282 07:40:16 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:52.540 nvme0n1 00:42:52.540 07:40:16 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:42:52.540 07:40:16 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:52.540 07:40:16 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:52.799 07:40:16 ftl -- ftl/common.sh@28 -- # stores=0edf62e4-1f8e-407d-93f5-bc206b4638a0 00:42:52.799 07:40:16 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:42:52.799 07:40:16 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0edf62e4-1f8e-407d-93f5-bc206b4638a0 00:42:53.057 07:40:17 ftl -- ftl/ftl.sh@23 -- # killprocess 82123 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@954 -- # '[' -z 82123 ']' 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@958 -- # kill -0 82123 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@959 -- # uname 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82123 00:42:53.057 killing process with pid 82123 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82123' 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@973 -- # kill 82123 00:42:53.057 07:40:17 ftl -- common/autotest_common.sh@978 -- # wait 82123 00:42:56.349 07:40:20 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:42:56.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:42:56.349 Waiting for block devices as requested 00:42:56.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:42:56.609 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:42:56.609 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:42:56.868 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:43:02.159 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:43:02.159 07:40:25 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:43:02.159 07:40:25 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:43:02.159 Remove shared memory files 00:43:02.159 07:40:25 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:43:02.159 07:40:25 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:43:02.159 07:40:25 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:43:02.159 07:40:25 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:43:02.159 07:40:25 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:43:02.159 
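(Annotation: the ftl/common.sh@28-30 xtrace just above shows how the suite clears leftover logical-volume stores before killing the target. A small sketch of that helper as the trace records it; rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and the local declarations are an assumption.)

# Sketch of clear_lvols as recorded by the xtrace above.
clear_lvols() {
    local stores lvs
    # Ask the running target for every lvstore UUID (e.g. 0edf62e4-... above)
    stores=$(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done
}

(Deleting stores by UUID rather than by name keeps the cleanup robust when an earlier test left stores with unpredictable names behind.)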
************************************ 00:43:02.159 END TEST ftl 00:43:02.159 ************************************ 00:43:02.159 00:43:02.159 real 10m51.608s 00:43:02.159 user 13m35.773s 00:43:02.159 sys 1m36.751s 00:43:02.159 07:40:25 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.159 07:40:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:43:02.159 07:40:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:02.159 07:40:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:02.159 07:40:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:02.159 07:40:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:02.159 07:40:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:02.159 07:40:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:02.159 07:40:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:02.159 07:40:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:02.159 07:40:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:02.159 07:40:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:02.159 07:40:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:02.159 07:40:26 -- common/autotest_common.sh@10 -- # set +x 00:43:02.159 07:40:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:02.159 07:40:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:02.159 07:40:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:02.159 07:40:26 -- common/autotest_common.sh@10 -- # set +x 00:43:04.063 INFO: APP EXITING 00:43:04.063 INFO: killing all VMs 00:43:04.063 INFO: killing vhost app 00:43:04.063 INFO: EXIT DONE 00:43:04.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:04.891 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:43:04.891 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:43:04.891 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:43:04.891 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:43:05.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:43:05.717 Cleaning 00:43:05.717 Removing: /var/run/dpdk/spdk0/config 00:43:05.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:05.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:05.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:05.717 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:05.717 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:05.717 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:05.976 Removing: /var/run/dpdk/spdk0 00:43:05.976 Removing: /var/run/dpdk/spdk_pid57988 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58245 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58485 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58589 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58645 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58784 00:43:05.976 Removing: /var/run/dpdk/spdk_pid58813 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59023 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59147 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59261 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59389 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59502 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59544 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59587 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59663 00:43:05.976 Removing: /var/run/dpdk/spdk_pid59769 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60257 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60337 
00:43:05.976 Removing: /var/run/dpdk/spdk_pid60417 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60444 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60614 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60635 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60800 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60822 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60897 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60920 00:43:05.976 Removing: /var/run/dpdk/spdk_pid60990 00:43:05.976 Removing: /var/run/dpdk/spdk_pid61019 00:43:05.976 Removing: /var/run/dpdk/spdk_pid61225 00:43:05.976 Removing: /var/run/dpdk/spdk_pid61267 00:43:05.977 Removing: /var/run/dpdk/spdk_pid61351 00:43:05.977 Removing: /var/run/dpdk/spdk_pid61556 00:43:05.977 Removing: /var/run/dpdk/spdk_pid61662 00:43:05.977 Removing: /var/run/dpdk/spdk_pid61704 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62198 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62307 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62422 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62486 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62517 00:43:05.977 Removing: /var/run/dpdk/spdk_pid62601 00:43:05.977 Removing: /var/run/dpdk/spdk_pid63250 00:43:05.977 Removing: /var/run/dpdk/spdk_pid63298 00:43:05.977 Removing: /var/run/dpdk/spdk_pid63827 00:43:05.977 Removing: /var/run/dpdk/spdk_pid63937 00:43:05.977 Removing: /var/run/dpdk/spdk_pid64063 00:43:05.977 Removing: /var/run/dpdk/spdk_pid64118 00:43:05.977 Removing: /var/run/dpdk/spdk_pid64149 00:43:05.977 Removing: /var/run/dpdk/spdk_pid64180 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66091 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66240 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66250 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66264 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66319 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66323 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66335 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66380 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66384 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66396 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66446 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66450 00:43:05.977 Removing: /var/run/dpdk/spdk_pid66468 00:43:05.977 Removing: /var/run/dpdk/spdk_pid67849 00:43:05.977 Removing: /var/run/dpdk/spdk_pid67968 00:43:05.977 Removing: /var/run/dpdk/spdk_pid69401 00:43:05.977 Removing: /var/run/dpdk/spdk_pid70787 00:43:05.977 Removing: /var/run/dpdk/spdk_pid70919 00:43:05.977 Removing: /var/run/dpdk/spdk_pid71036 00:43:05.977 Removing: /var/run/dpdk/spdk_pid71164 00:43:05.977 Removing: /var/run/dpdk/spdk_pid71302 00:43:06.236 Removing: /var/run/dpdk/spdk_pid71383 00:43:06.236 Removing: /var/run/dpdk/spdk_pid71536 00:43:06.236 Removing: /var/run/dpdk/spdk_pid71912 00:43:06.236 Removing: /var/run/dpdk/spdk_pid71960 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72450 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72641 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72741 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72862 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72921 00:43:06.236 Removing: /var/run/dpdk/spdk_pid72952 00:43:06.236 Removing: /var/run/dpdk/spdk_pid73245 00:43:06.236 Removing: /var/run/dpdk/spdk_pid73316 00:43:06.236 Removing: /var/run/dpdk/spdk_pid73412 00:43:06.236 Removing: /var/run/dpdk/spdk_pid73848 00:43:06.236 Removing: /var/run/dpdk/spdk_pid73993 00:43:06.236 Removing: /var/run/dpdk/spdk_pid74799 00:43:06.236 Removing: /var/run/dpdk/spdk_pid74948 00:43:06.236 Removing: /var/run/dpdk/spdk_pid75152 00:43:06.236 Removing: 
/var/run/dpdk/spdk_pid75260 00:43:06.236 Removing: /var/run/dpdk/spdk_pid75609 00:43:06.236 Removing: /var/run/dpdk/spdk_pid75879 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76247 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76458 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76573 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76648 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76775 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76811 00:43:06.236 Removing: /var/run/dpdk/spdk_pid76879 00:43:06.236 Removing: /var/run/dpdk/spdk_pid77083 00:43:06.236 Removing: /var/run/dpdk/spdk_pid77347 00:43:06.236 Removing: /var/run/dpdk/spdk_pid77722 00:43:06.236 Removing: /var/run/dpdk/spdk_pid78103 00:43:06.236 Removing: /var/run/dpdk/spdk_pid78468 00:43:06.236 Removing: /var/run/dpdk/spdk_pid78902 00:43:06.236 Removing: /var/run/dpdk/spdk_pid79044 00:43:06.236 Removing: /var/run/dpdk/spdk_pid79142 00:43:06.236 Removing: /var/run/dpdk/spdk_pid79746 00:43:06.236 Removing: /var/run/dpdk/spdk_pid79832 00:43:06.236 Removing: /var/run/dpdk/spdk_pid80221 00:43:06.236 Removing: /var/run/dpdk/spdk_pid80590 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81036 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81161 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81219 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81293 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81356 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81431 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81653 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81723 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81796 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81874 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81909 00:43:06.236 Removing: /var/run/dpdk/spdk_pid81986 00:43:06.236 Removing: /var/run/dpdk/spdk_pid82123 00:43:06.236 Clean 00:43:06.495 07:40:30 -- common/autotest_common.sh@1453 -- # return 0 00:43:06.495 07:40:30 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:06.495 07:40:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:06.495 07:40:30 -- common/autotest_common.sh@10 -- # set +x 00:43:06.495 07:40:30 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:43:06.495 07:40:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:06.495 07:40:30 -- common/autotest_common.sh@10 -- # set +x 00:43:06.495 07:40:30 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:06.495 07:40:30 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:43:06.495 07:40:30 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:43:06.495 07:40:30 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:43:06.495 07:40:30 -- spdk/autotest.sh@398 -- # hostname 00:43:06.495 07:40:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:43:06.755 geninfo: WARNING: invalid characters removed from testname! 
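(Annotation: the lcov capture above and the merge/filter passes logged immediately below, autotest.sh@398-408, are easier to read condensed. A sketch under stated assumptions: LCOV_OPTS abbreviates the long run of --rc flags repeated on every call, and $out/$rootdir stand in for the /home/vagrant/spdk_repo paths; the commands, options, and filter globs themselves are taken from the log.)

# Condensed sketch of the coverage post-processing recorded here.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
# Capture counters gathered during the test run, tagged with the VM hostname
lcov $LCOV_OPTS -q -c --no-external -d "$rootdir" -t "$(hostname)" -o "$out/cov_test.info"
# Merge the pre-test baseline with the test capture
lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# Strip bundled DPDK, system headers, and tooling that should not count as SPDK coverage
lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov $LCOV_OPTS -q -r "$out/cov_total.info" --ignore-errors unused '/usr/*' -o "$out/cov_total.info"
lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov $LCOV_OPTS -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
rm -f "$out/cov_base.info" "$out/cov_test.info"

(The '/usr/*' pass carries --ignore-errors unused because the pattern may match nothing on some hosts, which newer lcov releases otherwise treat as a hard error.)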
00:43:38.858 07:40:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:39.425 07:41:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:42.710 07:41:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:44.639 07:41:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:47.168 07:41:11 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:49.702 07:41:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:43:52.235 07:41:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:52.235 07:41:15 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:52.235 07:41:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:43:52.235 07:41:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:52.235 07:41:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:52.235 07:41:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:52.235 + [[ -n 5301 ]] 00:43:52.235 + sudo kill 5301 00:43:52.244 [Pipeline] } 00:43:52.260 [Pipeline] // timeout 00:43:52.265 [Pipeline] } 00:43:52.281 [Pipeline] // stage 00:43:52.287 [Pipeline] } 00:43:52.303 [Pipeline] // catchError 00:43:52.313 [Pipeline] stage 00:43:52.316 [Pipeline] { (Stop VM) 00:43:52.330 [Pipeline] sh 00:43:52.614 + vagrant halt 00:43:56.801 ==> default: Halting domain... 
00:44:03.399 [Pipeline] sh 00:44:03.679 + vagrant destroy -f 00:44:07.870 ==> default: Removing domain... 00:44:08.140 [Pipeline] sh 00:44:08.420 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:44:08.428 [Pipeline] } 00:44:08.441 [Pipeline] // stage 00:44:08.446 [Pipeline] } 00:44:08.459 [Pipeline] // dir 00:44:08.464 [Pipeline] } 00:44:08.479 [Pipeline] // wrap 00:44:08.485 [Pipeline] } 00:44:08.497 [Pipeline] // catchError 00:44:08.506 [Pipeline] stage 00:44:08.508 [Pipeline] { (Epilogue) 00:44:08.521 [Pipeline] sh 00:44:08.802 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:15.375 [Pipeline] catchError 00:44:15.377 [Pipeline] { 00:44:15.386 [Pipeline] sh 00:44:15.663 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:15.922 Artifacts sizes are good 00:44:15.931 [Pipeline] } 00:44:15.943 [Pipeline] // catchError 00:44:15.954 [Pipeline] archiveArtifacts 00:44:15.961 Archiving artifacts 00:44:16.075 [Pipeline] cleanWs 00:44:16.084 [WS-CLEANUP] Deleting project workspace... 00:44:16.084 [WS-CLEANUP] Deferred wipeout is used... 00:44:16.090 [WS-CLEANUP] done 00:44:16.093 [Pipeline] } 00:44:16.107 [Pipeline] // stage 00:44:16.111 [Pipeline] } 00:44:16.123 [Pipeline] // node 00:44:16.128 [Pipeline] End of Pipeline 00:44:16.164 Finished: SUCCESS